IBM has started giving registered members of its PartnerWorld program access to the training, badges and enablement IBM sales employees get along with a new learning hub for accessing materials.
The expansion is part of the Armonk, N.Y.-based tech giant’s investment in its partner program, IBM channel chief Kate Woolley told CRN in an interview.
“We can’t be essential unless our partners are skilled in our products and confident in going to their clients with our products and selling them with us and for IBM,” said Woolley, general manager of the IBM ecosystem.
Partners now have access to sales and technical badges showing industry expertise, according to a blog post Tuesday. Badges are shareable on LinkedIn and other professional social platforms. IBM sales representatives and partners will receive new content at the same time as it becomes available.
“This is the next step in that journey in terms of making sure that all of our registered partners have access to all of the same training, all of the same enablement materials as IBMers,” Woolley told CRN. “That’s the big message that we want people to hear. And then also in line with continuing to make it easier to do business with IBM, this has all been done through a much improved digital experience in terms of how our partners are able to access and consume.”
Among the materials available to IBM partners are scripts for sales demonstrations, templates for sales presentations and positioning offerings compared to competitors, white papers, analyst reports and solution briefs. Skilling and enablement materials are available through a new learning hub IBM has launched.
“The partners are telling us they want more expertise on their teams in terms of the IBM products that they’re able to sell and how equipped they are to sell them,” Woolley said. “And as we look at what we’re hearing from clients as well, clients want that. … Our clients are saying, ‘We want more technical expertise. We want more experiential selling. We want IBM’ – and that means the IBM ecosystem as well – ‘to have all of that expertise and to have access to all the right enablement material to be able to engage with us as clients.’”
The company has doubled the number of brand-specialized partner sellers in the ecosystem and increased the number of technical partner sellers by more than 35 percent, according to IBM.
The company’s latest program changes have led to improved deal registration and introduced to partners more than 7,000 potential deals valued at more than $500 million globally, according to IBM. Those numbers are based on IBM sales data from January through August 2022.
Along with the expanded access to training and enablement resources, Woolley told CRN that another example of aligning the IBM sales force and partners was a single sales kickoff event for employees and partners. A year ago, two separate events were held.
“I want our partners to continue to feel and see this as a big investment in them and representative of how focused we are on the ecosystem and how invested we are,” she said.
Understanding complex events in today’s business world is critical. The explosion of data analytics—and the desire for insights and knowledge—represents both an opportunity and a challenge.
At the center of this effort is predictive analytics. The ability to transform raw data into insights and make informed business decisions is crucial. Today, predictive analytics plays a role in almost every corner of the enterprise, from finance and marketing to operations and cybersecurity. It involves pulling data from legacy databases, data lakes, clouds, social media sites, point of sale terminals and IoT on the edge.
It’s critical to select a predictive analytics platform that generates actionable information. For example, financial institutions use predictive analytics to evaluate loan and credit card applications, and even grant a line of credit on the spot. Operations departments use predictive analytics to anticipate maintenance and repairs for equipment and vehicles, and marketing and sales use it to gauge interest in a new product.
Top predictive analytics platforms deliver powerful tools for ingesting and exporting data, processing it and delivering reports and visualizations that guide enterprise decision-making. They also support more advanced capabilities, such as machine learning (ML), deep learning (DL), artificial intelligence (AI) and even digital twins. Many solutions also provide robust tools for sharing visualizations and reports.
Predictive models are designed to deliver insights that guide enterprise decision-making. Solutions incorporate techniques such as data mining, statistical analysis, machine learning and AI. They are typically used for optimizing marketing, sales and operations; improving profits and reducing costs; and reducing risks related to things like security and climate change.
Also see: Best Data Analytics Tools
Four major types of predictive models exist:
A regression model estimates the relationship between different variables to deliver information and insight about future scenarios and impacts. It is sometimes referred to as “what if” analysis.
For example, a food manufacturer might study how different ingredients impact quality and sales. A clothing manufacturer might analyze how different colors increase or decrease the likelihood of a purchase. Models can incorporate correlations (relationships) and causality (reasons).
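A regression model of this kind can be sketched in a few lines. The example below fits a one-variable least-squares line on hypothetical data (an invented sweetness level versus units sold) and then answers a "what if" question for an untried value; the variable names and numbers are illustrative, not from any real dataset.

```python
# Minimal "what if" regression sketch on hypothetical data: fit a line
# relating an ingredient's sweetness level to units sold, then predict
# sales for an untried sweetness level.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical history: sweetness level -> units sold
sweetness = [1.0, 2.0, 3.0, 4.0]
units = [10.0, 14.0, 18.0, 22.0]

a, b = fit_line(sweetness, units)
predicted = a + b * 2.5  # "what if" we ship sweetness 2.5?
print(round(predicted, 1))  # 16.0
```

Real platforms fit far richer models with many variables, but the principle is the same: learn the relationship from history, then query it for scenarios you haven't tried.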
Classification models place data and information in categories based on historical knowledge. The data is labeled, and an algorithm learns the correlations. The model can then be updated as new data arrives. These models are commonly used for fraud detection and to identify cybersecurity attacks.
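The label-then-learn loop described above can be illustrated with a deliberately tiny classifier. The sketch below uses a nearest-centroid rule (one of the simplest classification techniques, not what any particular fraud platform uses) on made-up transaction features.

```python
# Toy classification sketch for fraud detection (hypothetical features):
# learn one centroid per label from historical, labeled transactions,
# then assign a new transaction to the nearest centroid.

def centroids(samples, labels):
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def classify(model, x):
    def dist2(label):
        return sum((a - b) ** 2 for a, b in zip(model[label], x))
    return min(model, key=dist2)

# Hypothetical features: (amount in $1000s, transactions in last hour)
history = [(0.1, 1), (0.2, 2), (9.0, 12), (8.5, 15)]
labels = ["legit", "legit", "fraud", "fraud"]

model = centroids(history, labels)
print(classify(model, (7.9, 10)))  # fraud
print(classify(model, (0.3, 1)))   # legit
```

Updating the model with new data simply means recomputing the centroids with the latest labeled transactions included.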
A cluster model assembles data into groups, based on common attributes and characteristics. It often spots hidden patterns in systems. In a factory, this might mean spotting misplaced supplies and equipment and then using the data to predict where it will be during a typical workday. In retail, a store might send out marketing materials to a specific group, based on a combination of factors such as income, occupation and distance.
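The grouping idea can be sketched with k-means, a standard clustering algorithm. The customer attributes below (income and distance to store) are invented for illustration; a real retail model would use many more factors.

```python
# Clustering sketch: a tiny k-means that groups customers by
# (income in $1000s, distance to store in km), hypothetical data,
# so marketing could target each group separately.

def kmeans(points, k, iters=20):
    centers = points[:k]  # naive init: first k points
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: sum(
                (a - b) ** 2 for a, b in zip(p, centers[j])))
            groups[i].append(p)
        centers = [
            tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

points = [(30, 2), (32, 3), (31, 2.5), (90, 20), (95, 22), (92, 21)]
centers, groups = kmeans(points, k=2)
print(sorted(len(g) for g in groups))  # [3, 3]
```

The algorithm finds the two natural groups (lower-income customers near the store, higher-income customers farther away) without ever being told they exist, which is the "hidden pattern" aspect the text describes.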
As the name implies, a time-series model looks at data during a given period, such as a day, month or year. Using predictive analytics, it’s possible to estimate what the trend will be in an upcoming period. It can be combined with other methods to understand underlying factors. For instance, a healthcare system might predict when the flu will peak—based on past and current time-series models.
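A minimal version of time-series extrapolation is a linear trend fit over past periods. The monthly counts below are invented; real epidemiological forecasting uses seasonal models, but the extrapolation step looks like this.

```python
# Time-series sketch (hypothetical monthly clinic-visit counts): fit a
# linear trend over past periods and extrapolate one period ahead.

def forecast_next(series):
    n = len(series)
    xs = list(range(n))
    mx = (n - 1) / 2
    my = sum(series) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, series)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return intercept + slope * n  # value at the next period

visits = [100, 120, 140, 160]  # steadily rising
print(forecast_next(visits))   # 180.0
```

Combining this with other methods, as the text notes, means layering seasonality, external drivers or classification outputs on top of the basic trend.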
In more advanced scenarios, predictive analytics also uses deep learning techniques that mimic the human brain through artificial neural networks. These methods may incorporate video, audio, text and other forms of unstructured data. For instance, voice recognition or facial recognition might analyze the tone or expression a person displays, and a system can then respond accordingly.
Also see: Top Business Intelligence Software
All major predictive analytics platforms are capable of producing valuable insights. It’s important to conduct a thorough internal review and understand what platform or platforms are the best fit for an enterprise. All predictive analytics solutions generate reports, charts, graphics and dashboards. The data must also be embedded into automation processes that are driven by other enterprise applications, such as an ERP or CRM system.
An internal evaluation should include the types of predictive analytics you need and what you want to do with the data. It should also include what type of user—business analyst or data scientist—will use the platform. This typically requires a detailed discussion with different business and technical groups to determine what types of analytics, models, insights and automations are needed and how they will be used.
The capabilities of today’s predictive analytics platforms are impressive—and companies add features and capabilities all the time. It’s critical to review your requirements and find an excellent match. This includes more than simply extracting value from your data. It’s important to review a vendor’s roadmap, its commitment to updates and security, and what quality assurance standards it has in place. Other factors include mobile support and scalability, Internet of Things (IoT) functionality, APIs to connect data with partners and others in a supply chain, and training requirements to get everyone up to speed.
Critical factors for vendor selection include support for required data formats, strong data import and export capabilities, cleansing features, templates, workflows and embedded analytics capabilities. The latter is critical because predictive analytics data is typically used across applications, websites and companies.
A credit card check, for example, must pull data from a credit bureau but also internal and other partner systems. The APIs that run the task are critical. But there are other things to focus on, including the user interface (UI) and user experience (UX). This extends to visual dashboards that staff uses to view data and understand events. It’s also vital to look at licensing costs and the level of support a vendor delivers. This might include online resources and communities as well as direct support.
Here are 10 of the top predictive analytics solutions:
Key Insight: Google’s Looker is a drag-and-drop platform adept at generating rich visualizations and excellent dashboards. It ranks high on flexibility, with support for almost any type of desired chart or graphics. It can generate valuable data for predictive analytics by connecting to numerous other data sources, including Microsoft SQL and Excel, Snowflake, SAP HANA, Salesforce and Amazon Redshift. It also includes powerful tools for selecting parameters, filtering data, building data-driven workflows and obtaining results. While the platform is part of Google Cloud and it is optimized for use within this environment, it supports custom applications.
Key Insight: The predictive analytics platform is designed to put statistical data to work across a wide array of industries and use cases. It includes powerful data ingestion capabilities, ad hoc reporting, predictive analytics, hypothesis analysis, statistical and geospatial analysis and 30 base machine learning components. IBM SPSS Modeler includes a rich set of tools and features, accessible through sophisticated dashboards.
Key Insight: Qlik Sense is a cloud-native platform designed specifically for business intelligence and predictive analytics. It delivers a robust set of features and capabilities, available through dashboards and visualizations. Qlik Sense includes AI and machine learning components; embedded analytics that can be used for websites, business applications and commercial software; and strong support for mobile devices. The solution supports hundreds of data types, and a broad array of analytics use cases.
Key Insight: Salesforce, the widely used CRM and sales automation platform, includes powerful analytics and business intelligence tools, including features driven by the company’s AI-focused Einstein Analytics. It delivers insights and suggestions through specialized AI agents. A centralized Salesforce dashboard offers charts, graphs and other insights, along with robust reporting capabilities. There’s also deep integration with the Tableau analytics platform, which is owned by Salesforce.
Key Insight: Formerly known as BusinessObjects for cloud, SAP Analytics Cloud can pull data from a broad array of sources and platforms. It is adept at data discovery, compilation, ad-hoc reporting and predictive analytics. It includes machine learning and AI functions that can guide data analysis and aid in modeling and planning. A dashboard offers flexible options for displaying analytics data in numerous ways.
Key Insight: SAS Visual Analytics is a low-code cloud solution designed to serve as a “single application for reporting, data exploration and analytics.” It imports data from numerous sources; supports rich dashboards and visualizations, with strong drill-down features; includes augmented analytics and ML; and includes robust collaboration and data sharing features. The platform also includes natural language chatbots that aid business users and other non-data scientists in content creation and management.
Key Insight: The enormous popularity of Tableau is based on the platform’s powerful features and its ability to generate a wide range of appealing and useful charts, graphs, maps and other visualizations through highly interactive real-time dashboards. The platform offers support for numerous predictive analytics frameworks, including regression, classification, clustering and time-series models. Non-data scientists typically find its user interface accommodating and easy to learn.
Key Insight: Teradata Vantage delivers powerful predictive analytics capabilities, including the ability to use data from both on-premises legacy hardware sources and multicloud public frameworks, including AWS, Azure and Google Cloud. The solution also works across virtually any data source, including data lakes and data warehouses. It supports sophisticated AI and ML functionality and includes no-code and low-code drag and drop components for building models and visuals. The company is especially known for its fraud prevention analytics tools, though it offers numerous other predictive tools and capabilities.
Key Insight: TIBCO Spotfire offers a powerful platform for performing predictive analytics. It connects to numerous data sources and includes real-time feeds that can be highly filtered and customized. The solution is designed for both business users and data scientists. It includes rich visualizations and supports customizations through R and Python.
Key Insight: The analytics cloud delivers highly flexible self-service predictive analytics. The query engine is designed to search on virtually any data format and understand complex table structures over billions of rows. It offers powerful search and filtering capabilities that extend to natural language queries. ThoughtSpot also provides a powerful processing engine that generates a range of visualizations, including social media intelligence. The solution includes a machine learning component that ranks the relevancy of results.
Also see: Top Cloud Companies
| Vendor | Product | Pros | Cons |
| --- | --- | --- | --- |
| Google | Looker | Excellent drag-and-drop interface with appealing visualizations and a high level of flexibility | Support is almost entirely online |
| IBM | SPSS Modeler | Excellent open-source extensibility; strong drag-and-drop functionality | Not as user friendly as other analytics solutions |
| Qlik | Qlik Sense | Powerful and versatile platform with strong modeling features | Some complaints about customer support |
| Salesforce | Salesforce | Highly customizable; strong integration with other platforms and data sources | Expensive |
| SAP | Analytics Cloud | Supports extremely large datasets and has powerful capabilities | Visualizations are not always as appealing as other platforms’ |
| SAS | Visual Analytics | High-performance platform that supports numerous data types and visualizations | May require customization |
| Tableau | Tableau Desktop | Outstanding UI and UX, with deep Salesforce/CRM integration | Can drain CPUs and RAM |
| TIBCO | Spotfire | Powerful predictive analytics platform with excellent visual output | Steep learning curve |
| Teradata | Vantage | Highly flexible with a powerful SQL engine; generates rich visualizations | Some users complain about an aging UI |
| ThoughtSpot | ThoughtSpot | Flexible, powerful capabilities with a strong UI and UX | Visualizations sometimes lag behind competitors |
Alex Wiltschko began collecting perfumes as a teenager. His first bottle was Azzaro Pour Homme, a timeless cologne he spotted on the shelf at a T.J. Maxx department store. He recognized the name from Perfumes: The Guide, a book whose poetic descriptions of aroma had kick-started his obsession. Enchanted, he saved up his allowance to add to his collection. “I ended up going absolutely down the rabbit hole,” he said.
More recently, as an olfactory neuroscientist for Google Research’s Brain Team, Wiltschko used machine learning to dissect our most ancient and least understood sense. Sometimes he looked almost longingly at his colleagues studying the other senses. “They have these beautiful intellectual structures, these cathedrals of knowledge,” he said, that explain the visual and auditory world, dwarfing what we know about olfaction.
Recent work by Wiltschko and his colleagues, however, is helping to change that. In a paper first posted on the biorxiv.org preprint server in July, they described using machine learning to tackle a long-standing challenge in olfactory science. Their findings significantly improved researchers’ ability to compute the smell of a molecule from its structure. Moreover, the way they improved those calculations gave new insights into how our sense of smell works, revealing a hidden order in how our perceptions of smells correspond to the chemistry of the living world.
When you inhale a whiff of your morning coffee, 800 different types of molecules travel to your smell receptors. From the complexity of this rich chemical portrait, our brains synthesize an overall perception: coffee. Researchers have found it exceptionally difficult, however, to predict what even a single molecule will smell like to us humans. Our noses host 400 different receptors for detecting the chemical makeup of the world around us, and we are only beginning to fathom how many of those receptors can interact with a given molecule. But even with that knowledge, it isn’t clear how combinations of odor inputs map onto our perceptions of fragrances as sweet, musky, disgusting and more.
“There was no clear model that would give you predictions for what most molecules smell like,” said Pablo Meyer, who studies biomedical analytics and the modeling of olfaction at IBM Research and was not involved in the latest study. Meyer decided to make the iconic structure-to-scent problem the focus of IBM’s 2015 DREAM challenge, a computing crowdsourcing competition. Teams competed to build models that could predict a molecule’s odor from its structure.
But even the best models couldn’t explain everything. Peppered throughout the data were pesky, irregular cases that resisted prediction. Sometimes, small tweaks to a molecule’s chemical structure yielded a totally new odor. Other times, major structural changes barely changed the odor.
To try to explain these irregular cases, Wiltschko and his team considered the requirements that evolution might have levied on our senses. Each sense has been tuned over millions of years to detect the most salient range of stimuli. For human vision and hearing, that’s light of wavelengths from 400-700 nanometers and sound waves between 20 and 20,000 hertz. But what governs the chemical world detected by our noses?
“The one thing that’s been constant over evolutionary time, at least from a very long time ago, is the core metabolic engine inside of every living thing,” said Wiltschko, who recently left Google Research to become an entrepreneur-in-residence at Alphabet’s venture capital subsidiary, GV.
Metabolism refers to the sets of chemical reactions — including the Krebs cycle, glycolysis, the urea cycle and many other processes — that are catalyzed by cellular enzymes and that convert one molecule into another in cells. These well-worn reaction pathways define a map of relationships among the naturally occurring chemicals that waft into our noses.
Wiltschko’s hypothesis was simple: Perhaps chemicals that smell similar are not just chemically related, but biologically related as well.
To test the idea, his team needed a map of the metabolic reactions that occur in nature. Fortunately, scientists in the field of metabolomics had already constructed a large database that outlined these natural chemical relationships and the enzymes that precipitate them. With this data, the researchers could pick two odorous molecules and calculate how many enzymatic reactions it would take to convert one to the other.
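Counting enzymatic steps between two molecules is a shortest-path problem on a reaction graph. The sketch below uses breadth-first search on a toy graph; the molecule names and reactions are hypothetical placeholders, not real metabolomics data.

```python
# Sketch of the metabolic-distance idea: treat molecules as nodes and
# known enzymatic reactions as edges, then count the fewest reaction
# steps between two molecules with breadth-first search.
from collections import deque

def metabolic_distance(reactions, start, goal):
    graph = {}
    for a, b in reactions:  # reactions treated as undirected edges
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, steps = queue.popleft()
        if node == goal:
            return steps
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, steps + 1))
    return None  # not connected by any known reactions

reactions = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "E")]
print(metabolic_distance(reactions, "A", "D"))  # 3
```

With a real metabolomics database, the edges would come from curated enzyme-catalyzed reactions rather than invented letter pairs, but the distance computation is the same.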
For comparison, they also needed a computer model that could quantify how various odorous molecules smell to humans. To that end, Wiltschko’s team had been refining a neural network model called the principal odor map that built on the findings of the 2015 DREAM competition. This map is like a cloud of 5,000 points, each representing one molecule’s scent. The points for molecules that smell similar cluster together, and ones that smell very different are far apart. Because the cloud is much more than 3D — it holds 256 dimensions of information — only advanced computing tools can grapple with its structure.
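The way such a map is queried can be sketched in miniature: each molecule is a point, and perceptual similarity is read off as the distance between points. The coordinates below are made up and only 4-dimensional, standing in for the 256-dimensional embedding described above.

```python
# Sketch of querying a high-dimensional "odor map": molecules are
# points in an embedding, and similar smells sit close together.
# Coordinates here are hypothetical, not from the actual model.
import math

odor_map = {
    "limonene": (0.9, 0.1, 0.2, 0.0),
    "citral":   (0.8, 0.2, 0.1, 0.1),
    "skatole":  (0.0, 0.9, 0.8, 0.7),
}

def odor_distance(a, b):
    return math.dist(odor_map[a], odor_map[b])

# Two citrus-smelling molecules sit close together in the map...
print(odor_distance("limonene", "citral") <
      odor_distance("limonene", "skatole"))  # True
```

The real map's 256 dimensions cannot be visualized directly, which is why the researchers work with distances and clusters rather than pictures of the full cloud.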
The researchers looked for corresponding relationships within the two data sources. They sampled 50 pairs of molecules and found that chemicals that were closer on the metabolism map also tended to be closer on the scent map, even if they had very different structures.
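A standard way to quantify such an agreement between two distance measures is a rank correlation. The sketch below implements Spearman correlation (assuming no tied values) on invented numbers; the paper's actual statistics may differ.

```python
# Rank-correlation sketch for the comparison step: do pairs that are
# close in metabolic distance also tend to be close in odor distance?
# Numbers are illustrative only.

def rank(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = r
    return ranks

def spearman(xs, ys):
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

metabolic = [1, 2, 3, 5, 8]             # reaction steps (hypothetical)
perceptual = [0.1, 0.3, 0.2, 0.6, 0.9]  # odor-map distances (hypothetical)
print(round(spearman(metabolic, perceptual), 2))  # 0.9
```

A value near 1 means the two orderings largely agree, which is the kind of correspondence the team found between the metabolism map and the scent map.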
Wiltschko was astonished by the correlation. The predictions still weren’t perfect, but they were better than any previous model had achieved with chemical structure alone, he said.
“That didn’t have to happen at all,” he said. “Two molecules that are biologically similar, like one enzyme catalysis step away, they could smell like roses and rotten eggs.” But they didn’t. “And that is crazy to me. It’s beautiful to me.”
The researchers also found that molecules that generally occur together in nature — for example, the different chemical components of an orange — tend to smell more similar than molecules without a natural association.
The findings are “intuitive and elegant,” said Robert Datta, a neurobiologist at Harvard Medical School and Wiltschko’s former doctoral adviser, who was not involved in the latest study. “It’s like the olfactory system is built to detect a variety of [chemical] coincidences,” he said. “So metabolism governs the coincidences that are possible.” This indicates that there’s another feature besides a molecule’s chemical structure that matters to our noses — the metabolic process that produced the molecule in the natural world.
“The olfactory system is tuned for the universe it sees, which are these structures of molecules. And how these molecules are made is part of that,” said Meyer. He praised the cleverness of the idea of using metabolism to refine the categorization of scents. Although the metabolism-based map doesn’t drastically improve on structural models, since a molecule’s metabolic origin is already closely related to its structure, “it does bring some extra information,” he said.
The next frontier of olfactory neuroscience will involve the odors of mixtures instead of individual molecules, Meyer predicts. In real life, we very rarely inhale just one chemical at a time; think of the hundreds wafting from your coffee mug. Right now, scientists don’t have enough data on odorant mixtures to build a model like the one for pure chemicals used in the latest study. To truly understand our sense of smell, we’ll need to examine how constellations of chemicals interact to form complex odors like those in Wiltschko’s perfume bottles.
This project has already changed how Wiltschko thinks about his lifelong passion. When you experience a smell, “you are perceiving parts of another living thing,” he said. “I just think that’s really beautiful. I feel more connected to life that way.”
Editor’s note: Datta, an investigator with the Simons Collaboration on Plasticity and the Aging Brain and SFARI, receives funding from the Simons Foundation, which also sponsors this editorially independent magazine.
The development of land, air, and sea vehicles with low drag and good stability has benefited greatly from the huge strides made in Computational Fluid Dynamics (CFD). Simulation of fluid flow over three-dimensional computer representations of a vehicle requires the solving of Navier-Stokes equations through many hundreds—and often thousands—of iterations. The result is an approximation of the flow field and pressure distribution that can be used to visualize the flow through streamlines and other techniques. The downside is the enormous amount of computing time that it takes—many hours or even days—to achieve a reasonably accurate and converged solution.
Now, Nobuyuki Umetani, formerly of Autodesk Research (and now at the University of Tokyo), and Bernd Bickel, from the Institute of Science and Technology Austria (IST Austria), have devised a way to speed these simulations. They have developed a method using machine learning that “learns” to model flow around three-dimensional objects, making streamlines and parameters like drag coefficient available in real time.
Using machine learning to help predict fluid flow came from discussions between Umetani and Bickel, long-time collaborators in CFD. "We both share the vision of making simulations faster," explained Bickel in an IST news release. "We want people to be able to design objects interactively, and therefore we work together to develop data-driven methods."
The new fluid flow simulation technique shows streamlines and pressure distribution on the vehicle surface (color-coded). On the left are some of the shapes used to train the program. On the right are the results of new vehicles simulated by the program. (Image source: Nobuyuki Umetani)
Machine Learning in Training
The technique the pair developed involves “training” the machine learning program on the converged CFD data for a variety of shapes and vehicle designs that are representative of typical vehicles. More than 800 vehicle shapes were used to train the program. Once the program has been trained, a process using Gaussian Process regression is utilized to infer the velocities and pressures for a new shape based on all of the previous vehicles and shapes. "With our machine learning tool, we are able to predict the flow in fractions of a second," said Nobuyuki Umetani in the IST release.
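The inference step rests on Gaussian Process regression: given training pairs, the posterior mean at a new input is a kernel-weighted combination of the training outputs. The sketch below shows the closed-form computation for a single scalar output with two training points and an RBF kernel; the shape parameter and drag values are hypothetical, and the real method regresses full velocity and pressure fields over hundreds of training shapes.

```python
# Gaussian Process regression sketch for the surrogate idea: train on a
# few (shape parameter -> drag coefficient) pairs, then infer the value
# for a new shape in closed form. All numbers are illustrative.
import math

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) kernel in one dimension."""
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def gp_predict(train_x, train_y, x):
    """Noise-free GP posterior mean (exactly 2 training points)."""
    (x1, x2), (y1, y2) = train_x, train_y
    k11, k12, k22 = rbf(x1, x1), rbf(x1, x2), rbf(x2, x2)
    det = k11 * k22 - k12 * k12          # invert the 2x2 Gram matrix
    a1 = (k22 * y1 - k12 * y2) / det
    a2 = (-k12 * y1 + k11 * y2) / det
    return rbf(x, x1) * a1 + rbf(x, x2) * a2

# Hypothetical training data: shape parameter -> drag coefficient
cd = gp_predict((0.0, 1.0), (0.30, 0.40), 0.5)
print(round(cd, 3))  # prediction falls between the two training values
```

Evaluating this closed-form expression is why predictions take fractions of a second: no flow equations are solved at query time, only kernel evaluations against the training set.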
Machine learning has some restrictive requirements that had to be overcome in the development of this method. In machine learning, both the input and the output data need to be structured in a way that is consistent. This is relatively easy to accomplish with two-dimensional images, where a regular arrangement of pixels can represent the object. In three dimensions, however, the geometric objects define the shape. With a mesh of triangles, for example, the arrangement of the triangles can change if the shape changes, resulting in an inconsistency.
Umetani’s solution was to adapt polycubes to build a shape that could be used with machine learning. The polycube approach was originally developed to apply textures to objects in computer animations. The IST release described their use in this way: “A model starts with a small number of large cubes, which are then refined and split up in smaller ones following a well-defined procedure. If represented in this way, objects with similar shapes will have a similar data structure that machine learning methods can handle and compare.”
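The consistency requirement can be illustrated with a much cruder stand-in for polycubes: rasterizing each shape onto a fixed occupancy grid, so every shape becomes a vector of the same length. This is not the paper's actual encoding, only a sketch of why a fixed data layout matters for machine learning.

```python
# Sketch of the consistency problem the polycube representation solves:
# ML needs every shape encoded with the same layout. Here, 2-D point
# sets of different sizes are mapped onto one fixed 4x4 occupancy grid.

def to_grid(points, size=4):
    """Map 2-D points in [0,1)x[0,1) to a flat size*size occupancy vector."""
    grid = [0] * (size * size)
    for x, y in points:
        gx = min(int(x * size), size - 1)
        gy = min(int(y * size), size - 1)
        grid[gy * size + gx] = 1
    return grid

shape_a = [(0.1, 0.1), (0.3, 0.1), (0.6, 0.4)]
shape_b = [(0.1, 0.12), (0.32, 0.1), (0.6, 0.42), (0.9, 0.9)]

va, vb = to_grid(shape_a), to_grid(shape_b)
print(len(va) == len(vb))  # True: fixed-size inputs regardless of shape
```

Polycubes achieve the same end far more faithfully: similar shapes get similar, comparable data structures, while a raw triangle mesh would reshuffle its elements every time the geometry changed.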
Aside from the huge time savings, the method described allows modifications and shape changes to be made in real time by interactively pulling and pushing the polycubes. The changes in drag coefficient, surface pressure distribution, and flow field streamlines are shown nearly instantly. As a result, the designer or stylist can immediately see the effect of their shape changes. This video shows the interactive capability of the new program.
A paper written by Umetani and Bickel and published in the journal ACM Transactions on Graphics also details the accuracy of the method. The results show errors (approximately 3.4 percent in drag coefficient) similar to those of other CFD techniques, which is consistent with the error expected when different wind tunnels are compared to one another under similar conditions.
One reason for the high level of accuracy comes directly from machine learning. "When simulations are made in the classical way, the results for each tested shape are eventually thrown away after the computation. This means that every new computation starts from scratch. With machine learning, we make use of the data from previous calculations, and if we repeat a calculation, the accuracy increases,” explained Umetani.
Senior Editor Kevin Clemens has been writing about energy, automotive, and transportation topics for more than 30 years. He has master’s degrees in Materials Engineering and Environmental Education and a doctorate in Mechanical Engineering, specializing in aerodynamics. He has set several world land speed records on electric motorcycles that he built in his workshop.
Accurately modeling extreme precipitation events remains a major challenge for climate models. These models predict how the Earth's climate may change over the course of decades and even centuries. To improve them, especially with regard to extreme events, researchers now use machine learning methods otherwise applied to image generation.
Computers already use artificial intelligence to improve the resolution of fuzzy images, to create images imitating the style of particular painters based on photographs, or to render realistic portraits of people who do not actually exist. The underlying method is based on what are referred to as GANs (Generative Adversarial Networks).
A team led by Niklas Boers, Professor for Earth System Modeling at the Technical University of Munich (TUM) and researcher at the Potsdam Institute for Climate Impact Research (PIK) is now applying these machine learning algorithms to climate research. The research group recently published its findings in Nature Machine Intelligence.
Not all processes can be taken into account
"Climate models differ from the models used to make weather forecasts, especially in terms of their broader time horizon. The forecast horizon for weather predictions is several days, while climate models perform simulations over decades or even centuries," explains Philipp Hess, lead author of the study and research associate at the TUM Professorship for Earth System Modeling.
Weather can be predicted fairly exactly for a few days; the prediction can then be checked against real observations. When it comes to climate, however, the objective is not a time-based prediction, but among other things projections of how increasing greenhouse gas emissions will impact the Earth's climate in the long run.
However, climate models still can't take all relevant climate processes perfectly into account. This is on the one hand because some processes have not yet been understood sufficiently, and on the other hand because detailed simulations would take too long and require too much computing power. "As a result, climate models still can't represent extreme precipitation events the way we'd like. Therefore, we started using GANs to optimize these models with regard to their precipitation output," says Niklas Boers.
Optimizing climate models with weather data
Roughly speaking, a GAN consists of two neural networks. One network attempts to create an example of a previously defined kind of data, while the other tries to distinguish this artificially generated example from real examples. The two networks thus compete with one another, continuously improving in the process.
One practical application of GANs would be "translating" landscape paintings into realistic photographs. The two neural networks take photo-realistic images generated on the basis of the painting and send them back and forth until the images created can no longer be distinguished from real photographs.
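The adversarial setup described above can be sketched with a toy example. The code below is a minimal illustration, not the model the TUM/PIK team used: a one-parameter linear generator learns to mimic samples from a fixed Gaussian, while a logistic-regression discriminator tries to tell real samples from generated ones. All function names, hyperparameters, and the target distribution are invented for illustration.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def train_toy_gan(steps=3000, batch=64, lr=0.05, seed=0):
    """Toy 1-D GAN: generator G(z) = wg*z + bg tries to mimic N(4, 1.25);
    discriminator D(x) = sigmoid(wd*x + bd) tries to tell real from fake."""
    rng = np.random.default_rng(seed)
    wg, bg = 1.0, 0.0   # generator parameters
    wd, bd = 0.1, 0.0   # discriminator parameters
    for _ in range(steps):
        x_real = rng.normal(4.0, 1.25, batch)   # samples from the "real" data
        z = rng.normal(0.0, 1.0, batch)         # generator input noise
        x_fake = wg * z + bg

        # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
        d_real = sigmoid(wd * x_real + bd)
        d_fake = sigmoid(wd * x_fake + bd)
        wd += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
        bd += lr * np.mean((1 - d_real) - d_fake)

        # Generator: gradient ascent on the non-saturating objective log D(fake)
        x_fake = wg * z + bg
        d_fake = sigmoid(wd * x_fake + bd)
        wg += lr * np.mean((1 - d_fake) * wd * z)
        bg += lr * np.mean((1 - d_fake) * wd)
    return wg, bg

if __name__ == "__main__":
    wg, bg = train_toy_gan()
    z = np.random.default_rng(1).normal(0.0, 1.0, 10_000)
    print("generated mean:", np.mean(wg * z + bg))  # target distribution has mean 4.0
```

The same adversarial principle, with far larger networks and climate fields instead of scalars, underlies the post-processing approach described next.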
Niklas Boers' team took a similar approach: The researchers used a comparatively simple climate model to demonstrate the potential of using machine learning to improve such models. The team's algorithms use observed weather data. Using this data, the team trained the GAN to modify the simulations of the climate model so that they could no longer be distinguished from real weather observations.
"This way the degree of detail and realism can be increased without the need for complicated additional process calculations," says Markus Drücke, climate modeler at PIK and co-author of the study.
GANs can reduce electricity consumed in climate modeling
Even relatively simple climate models are complex and run on supercomputers that consume large amounts of energy. The more details the model takes into account, the more complicated the calculations become and the greater the amount of electricity used. The calculations involved in applying a trained GAN to a climate simulation are, however, negligible compared to the amount of computation required for the climate model itself.
"Using GANs to make climate models more detailed and more realistic is thus practical not only for the improvement and acceleration of the simulations, but also in terms of saving electricity," Philipp Hess says.
Climate model code: zenodo.org/record/4700270#.YzxNXXbMJhE
Analysis code: codeocean.com/capsule/3633745/tree/v1
Citation: Climate simulation more realistic with artificial intelligence (2022, October 4) retrieved 17 October 2022 from https://phys.org/news/2022-10-climate-simulation-realistic-artificial-intelligence.html
IBM, which three years ago acquired Red Hat, is now moving Red Hat OpenShift Data Foundation and Red Hat Ceph, along with their development teams, into IBM Storage as part of a move to make a bigger play in the software-defined and open-source storage worlds.
IBM Tuesday said it has absorbed storage technology and teams from its Red Hat business to combine them with IBM’s own storage business unit as a way to help clients take advantage of the two without requiring extra integration or having to deal with multiple sales teams.
IBM is integrating Red Hat OpenShift Data Foundation with its IBM Spectrum Fusion and will offer Red Hat Ceph-based storage technologies to its clients in a move to continue Big Blue’s software-defined storage leadership, said Brent Compton, senior director of Data Foundation for Red Hat’s hybrid cloud business.
For IBM, which in mid-2019 acquired Red Hat in a $34-billion deal, the move ensures maximum support for Red Hat OpenShift Data Foundation and Ceph, Compton told CRN.
[Related: 2022 Storage 100: Who’s Got Your Backup?]
“OpenShift Data Foundation and Ceph will become a big part of IBM Storage,” he said. “IBM has been looking for a way to take advantage of Ceph and ODF, and now it can.”
Ceph is an open-source software-defined object storage technology with interfaces for object, block and file storage. Red Hat OpenShift Data Foundation is a software-defined container-native storage that provides cluster data management capabilities as part of the OpenShift container platform.
Scott Baker, chief marketing officer and vice president of IBM hybrid cloud portfolio and product marketing, told CRN the move to combine Red Hat and IBM storage technologies sets the stage for growth in the combined software-defined storage portfolio.
“Customers not only get a choice of where storage runs—at the edge, in the cloud, or on-prem—but will find storage software releases will no longer be tied to the timing of storage hardware releases,” Baker said. “For instance, IBM normally enhances its Spectrum Virtualize or Spectrum Scale with new versions of the IBM FlashSystem. But with software-defined storage, we can drive changes quicker if they’re not tied to hardware releases.”
By bringing Red Hat OpenShift Data Foundation and Ceph into IBM, customers get the opportunity to access unified block, file, and object storage without regard to the real underlying hardware, Baker said.
“They can use Ceph to add the right type of storage depending on the protocol they need,” he said. “Ceph and ODF also simplify how IBM provides data storage and protection. To do all that with IBM’s storage portfolio takes time. With Ceph and ODF as part of IBM Storage, this can get done immediately.”
It really is the best of both worlds, as Red Hat customers will also see strong benefits from IBM Storage, Compton said.
“It’s important to note that IBM will continue to offer OpenShift Data Foundation inside the Red Hat OpenShift Platform Plus hybrid cloud platform,” he said. “So if a customer gets pre-integrated OpenShift Data Foundation inside Red Hat OpenShift Platform Plus, it accelerates their time to market. There’s no need to integrate the storage. This will not change.”
Also, Red Hat OpenShift customers have used Ceph to accelerate their time to scale for years, and Red Hat will continue to sell Ceph, Compton said.
“But by moving Ceph to IBM Storage, IBM will accelerate development of the storage-specific features,” he said. “Red Hat is not a storage company. So this will accelerate development of unified capabilities.”
IBM’s storage move makes good on the potential many saw with the company’s acquisition of Red Hat, said John Teltsch, chief revenue officer at Converge Technology Solutions, a Gatineau, Quebec-based solution provider and channel partner to both IBM and Red Hat that ranked No. 36 on CRN’s 2022 Solution Provider 500.
“This is something the channel has been waiting for ever since IBM acquired Red Hat,” Teltsch told CRN. “IBM has been doing a lot around software-defined storage. And when you add in Red Hat, it gives us an integrated solutions play. It lets us build an integrated sales team. We don’t have to first talk about IBM storage capabilities, and then bring in our Red Hat team to talk about Red Hat.”
Converge Technology Solutions’ IBM and Red Hat sales teams are currently two separate teams, said Teltsch, who joined the company in March from IBM, where he held numerous sales leadership roles, including two years as Big Blue’s channel chief.
“Once IBM and Red Hat storage are together, it gets more simple to sell,” he said. “And it simplifies our training while IBM will have one integrated set of offerings for its clients. This lets us bring the best of Red Hat open-source capabilities with IBM storage. We’re living in a data-driven world. This move simplifies our go-to-market, as well as simplifies the client experience, client engagement, and client adoption.”
New Jersey, United States, Oct. 12, 2022 /DigitalJournal/ In a nutshell, learning analytics is primarily focused on collecting and leveraging student learning progress data to optimize learning outcomes. This includes understanding each student's time on coursework and determining a student's progress over time.
The learning analytics (LA) market is growing rapidly in educational institutions worldwide, especially for higher education and MOOC providers, primarily driven by the need to improve the student success rate, course material efficiency, retention, and the learning experience. Learning analytics follows learners' digital footprints, transforming how they view learning processes and enabling data-driven decisions to maximize student success. It has continued to evolve as technology remains one of the main drivers of educational change.
The Learning Analytics Market research report provides all the information related to the industry. It gives the market's outlook by providing authentic data to its clients, which helps them make essential decisions. It gives an overview of the market, including its definition, applications and developments, and manufacturing technology. This Learning Analytics market research report tracks all the latest developments and innovations in the market. It provides data on the obstacles to establishing a business and offers guidance for overcoming upcoming challenges.
This Learning Analytics research report sheds light on the major market players thriving in the market; it tracks their business strategies, financial status, and upcoming products.
Some of the top companies influencing this market include: Oracle, Blackboard, IBM, Microsoft, Pearson, Saba Software, SumTotal Systems, McGraw-Hill Education, SAP, and Desire2Learn.
Firstly, this Learning Analytics research report introduces the market by providing an overview that includes definitions, applications, product launches, developments, challenges, and regions. The market is forecast to show strong development, driven by consumption in various markets. An analysis of the current market designs and other basic characteristics is provided in the Learning Analytics report.
The region-wise coverage of the market is mentioned in the report, mainly focusing on the regions:
Segmentation Analysis of the market
The market is segmented based on type, product, end users, raw materials, etc. The segmentation helps to deliver a precise explanation of the market.
Market Segmentation: By Type
On-Premises, Cloud-Based
Market Segmentation: By Application
An assessment of the market's attractiveness with regard to the competition that new players and products are likely to present to established ones has been provided in the publication. The research report also mentions the innovations, new developments, marketing strategies, branding techniques, and products of the key participants in the global Learning Analytics market. To present a clear vision of the market, the competitive landscape has been thoroughly analyzed utilizing value chain analysis. The opportunities and threats facing the key market players in the future have also been emphasized in the publication.
This report aims to provide:
Table of Contents
Global Learning Analytics Market Research Report 2022 – 2029
Chapter 1 Learning Analytics Market Overview
Chapter 2 Global Economic Impact on Industry
Chapter 3 Global Market Competition by Manufacturers
Chapter 4 Global Production, Revenue (Value) by Region
Chapter 5 Global Supply (Production), Consumption, Export, Import by Regions
Chapter 6 Global Production, Revenue (Value), Price Trend by Type
Chapter 7 Global Market Analysis by Application
Chapter 8 Manufacturing Cost Analysis
Chapter 9 Industrial Chain, Sourcing Strategy and Downstream Buyers
Chapter 10 Marketing Strategy Analysis, Distributors/Traders
Chapter 11 Market Effect Factors Analysis
Chapter 12 Global Learning Analytics Market Forecast