IBM Research Rolls Out A Comprehensive AI And Platform-Based Edge Research Strategy Anchored By Enterprise Use Cases And Partnerships

I recently met with Dr. Nick Fuller, Vice President, Distributed Cloud, at IBM Research for a discussion about IBM’s long-range plans and strategy for artificial intelligence and machine learning at the edge.

Dr. Fuller is responsible for providing AI and platform–based innovation for enterprise digital transformation spanning edge computing and distributed cloud management. He is an IBM Master Inventor with over 75 patents and co-author of 75 technical publications. Dr. Fuller obtained his Bachelor of Science in Physics and Math from Morehouse College and his PhD in Applied Physics from Columbia University.

Edge In, not Cloud Out

In general, Dr. Fuller told me that IBM is focused on developing an "edge in" position versus a "cloud out" position with data, AI, and Kubernetes-based platform technologies to scale hub and spoke deployments of edge applications.

A hub plays the role of a central control plane used for orchestrating the deployment and management of edge applications in a number of connected spoke locations such as a factory floor or a retail branch, where data is generated or locally aggregated for processing.
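This hub-and-spoke pattern can be sketched in a few lines of Python. The names and structure below are hypothetical, not IBM's actual API; the point is that the hub is the single control point that pushes application deployments out to every spoke.

```python
from dataclasses import dataclass, field

@dataclass
class Spoke:
    name: str
    apps: dict = field(default_factory=dict)  # app name -> deployed version

@dataclass
class Hub:
    spokes: list

    def deploy(self, app: str, version: str):
        # The hub orchestrates: it pushes the desired app/version
        # to every connected spoke location.
        for spoke in self.spokes:
            spoke.apps[app] = version

    def status(self, app: str) -> dict:
        # Single pane of glass: one query reports state across all spokes.
        return {s.name: s.apps.get(app) for s in self.spokes}

hub = Hub([Spoke("factory-floor"), Spoke("retail-branch")])
hub.deploy("defect-detector", "1.2.0")
print(hub.status("defect-detector"))
```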

“Cloud out” refers to the paradigm where cloud service providers are extending their cloud architecture out to edge locations. In contrast, “edge in” refers to a provider-agnostic architecture that is cloud-independent and treats the data-plane as a first-class citizen.

IBM's overall architectural principles are scalability, repeatability, and full-stack solution management, allowing everything to be managed using a single unified control plane.

IBM’s Red Hat platform and infrastructure strategy anchors the application stack with a unified, scalable, and managed OpenShift-based control plane equipped with a high-performance storage appliance and self-healing system capabilities (inclusive of semi-autonomous operations).

IBM’s strategy also includes several in-progress platform-level technologies for scalable data, AI/ML runtimes, accelerator libraries for Day-2 AI operations, and scalability for the enterprise.

It is important to note that IBM is designing its edge platforms with labor cost and the technical workforce in mind. Data scientists with PhDs are in high demand, making them difficult to find and expensive to hire. IBM is designing its edge system capabilities and processes so that domain experts, rather than PhDs, can deploy new AI models and manage Day-2 operations.

Why edge is important

Advances in computing and storage have made it possible for AI to process mountains of accumulated data to provide solutions. By bringing AI closer to the source of data, edge computing is faster and more efficient than the cloud. While cloud data accounts for 60% of the world’s data today, vast amounts of new data are being created at the edge, including data from industrial applications, traffic cameras, and order management systems, all of which can be processed at the edge in a fast and timely manner.

Public cloud and edge computing differ in capacity, technology, and management. An advantage of edge is that data is processed and analyzed at / near its collection point at the edge. In the case of cloud, data must be transferred from a local device and into the cloud for analytics and then transferred back to the edge again. Moving data through the network consumes capacity and adds latency to the process. It’s easy to see why executing a transaction at the edge reduces latency and eliminates unnecessary load on the network.
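The latency argument is simple arithmetic. A toy comparison with made-up numbers (all figures illustrative, not measurements) shows why local inference can win even on slower hardware:

```python
# Illustrative (made-up) numbers: total time to act on a sensor reading
# when inference runs at the edge vs. round-tripping through the cloud.
def total_latency_ms(inference_ms: float, network_rtt_ms: float = 0.0) -> float:
    return inference_ms + network_rtt_ms

# Edge: slower local hardware, but no data transfer.
edge = total_latency_ms(inference_ms=20)
# Cloud: faster servers, but the data must travel there and back.
cloud = total_latency_ms(inference_ms=10, network_rtt_ms=80)
print(edge, cloud)  # edge is faster overall despite slower compute
```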

Increased privacy is another benefit of processing data at the edge. Analyzing data where it originates limits the risk of a security breach. Most of the communications between the edge and the cloud are then confined to such things as reporting, data summaries, and AI models, without ever exposing the raw data.

IBM at the Edge

In our discussion, Dr. Fuller provided a few examples to illustrate how IBM plans to provide new and seamless edge solutions for existing enterprise problems.

Example #1 – McDonald’s drive-thru

Dr. Fuller’s first example centered on the quick-service restaurant (QSR) problem of drive-thru order taking. Last year, IBM acquired an automated order-taking system from McDonald's. As part of the acquisition, IBM and McDonald's established a partnership to perfect voice ordering methods using AI. Drive-thru orders are a significant percentage of total orders for McDonald's and other QSR chains.

McDonald's and other QSR restaurants would like every order to be processed as quickly and accurately as possible. For that reason, McDonald's conducted trials at ten Chicago restaurants using an edge-based AI ordering system with NLP (Natural Language Processing) to convert spoken orders into a digital format. It was found that AI had the potential to reduce ordering errors and processing time significantly. Since McDonald's sells almost 7 million hamburgers daily, shaving a minute or two off each order represents a significant opportunity to address labor shortages and increase customer satisfaction.
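A minimal sketch of the idea, with a toy menu and parser standing in for the production NLP system (menu items, prices, and all names here are hypothetical, not McDonald's or IBM's actual system):

```python
import re

# Toy stand-in for the NLP front end: map a spoken-order transcript
# to a structured digital order (quantities per menu item).
MENU = {"hamburger": 2.50, "fries": 1.75, "shake": 3.00}
NUMBERS = {"a": 1, "one": 1, "two": 2, "three": 3}

def parse_order(transcript: str) -> dict:
    tokens = re.findall(r"[a-z']+", transcript.lower())
    order: dict = {}
    qty = None
    for tok in tokens:
        # Strip a plural "s" only if the singular is a known menu item.
        singular = tok[:-1] if tok.endswith("s") and tok[:-1] in MENU else tok
        if tok in NUMBERS:
            qty = NUMBERS[tok]          # remember the quantity word
        elif singular in MENU and qty is not None:
            order[singular] = order.get(singular, 0) + qty
            qty = None
    return order

print(parse_order("I'd like two hamburgers and one fries please"))
```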

Example #2 – Boston Dynamics and Spot the agile mobile robot

According to an earlier IBM survey, many manufacturers have already implemented AI-driven robotics with autonomous decision-making capability. The study also indicated that over 80 percent of companies believe AI can help improve future business operations. However, some companies expressed concern about the limited mobility of edge devices and sensors.

To develop a mobile edge solution, IBM teamed up with Boston Dynamics. The partnership created an agile mobile robot using IBM Research and IBM Sustainability Software AI technology. The device can analyze visual sensor readings in hazardous and challenging industrial environments such as manufacturing plants, warehouses, electrical grids, waste treatment plants and other hazardous environments. The value proposition that Boston Dynamics brought to the partnership was Spot the agile mobile robot, a walking, sensing, and actuation platform. Like all edge applications, the robot’s wireless mobility uses self-contained AI/ML that doesn’t require access to cloud data. It uses cameras to read analog devices, visually monitor fire extinguishers, and conduct a visual inspection of human workers to determine if required safety equipment is being worn.

IBM was able to show up to a 10X speedup by automating some manual tasks, such as converting the detection of a problem into an immediate work order in IBM Maximo to correct it. A fast automated response was not only more efficient, but it also improved the safety posture and risk management for these facilities. Similarly, some factories need to thermally monitor equipment to identify any unexpected hot spots that may show up over time, indicative of a potential failure.

IBM is working with National Grid, an energy company, to develop a mobile solution using Spot, the agile mobile robot, for image analysis of transformers and thermal connectors. As shown in the above graphic, Spot also monitored connectors on both flat surfaces and 3D surfaces. IBM was able to show that Spot could detect excessive heat build-up in small connectors, potentially avoiding unsafe conditions or costly outages. This AI/ML edge application can produce faster response times when an issue is detected, which is why IBM believes significant gains are possible by automating the entire process.

IBM market opportunities

Drive-thru orders and mobile robots are just a few examples of the millions of potential AI applications that exist at the edge and are driven by several billion connected devices.

Edge computing is an essential part of enterprise digital transformation. Enterprises seek ways to demonstrate the feasibility of solving business problems using AI/ML and analytics at the edge. However, once a proof of concept has been successfully demonstrated, it is a common problem for a company to struggle with scalability, data governance, and full-stack solution management.

Challenges with scaling

“Determining entry points for AI at the edge is not the difficult part,” Dr. Fuller said. “Scale is the real issue.”

Scaling edge models is complicated because there are so many edge locations with large amounts of diverse content and a high device density. Because large amounts of data are required for training, data gravity is a potential problem. Further, in many scenarios, vast amounts of data are generated quickly, leading to potential data storage and orchestration challenges. AI models are also rarely "finished." Monitoring and retraining of models are necessary to keep up with changes in the environment.
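Monitoring for retraining needs can be as simple as a statistical drift check. A minimal sketch, with an assumed z-score threshold (illustrative only, not IBM's actual method):

```python
import statistics

# Sketch of Day-2 drift monitoring: flag a model for retraining when the
# mean of recent inputs shifts too far from the training baseline.
# The 3-sigma threshold on the standard error is an assumption.
def needs_retraining(baseline, recent, z_threshold=3.0) -> bool:
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    shift = abs(statistics.mean(recent) - mu)
    # Compare the shift against the expected standard error of the mean.
    return shift > z_threshold * sigma / len(recent) ** 0.5

baseline = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7]
print(needs_retraining(baseline, [10.0, 10.1, 9.9, 10.2]))   # stable: no retrain
print(needs_retraining(baseline, [12.5, 12.8, 12.4, 12.6]))  # drifted: retrain
```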

Through IBM Research, IBM is addressing the many challenges of building an all-encompassing edge architecture and horizontally scalable data and AI technologies. IBM has a wealth of edge capabilities and an architecture to create the appropriate platform for each application.

IBM AI entry points at the edge

IBM sees Edge Computing as a $200 billion market by 2025. Dr. Fuller and his organization have identified four key market entry points for developing and expanding IBM’s edge compute strategy. In order of size, IBM believes its priority edge markets to be intelligent factories (Industry 4.0), telcos, retail automation, and connected vehicles.

IBM and its Red Hat portfolio already have an established presence in each market segment, particularly in intelligent operations and telco. Red Hat is also active in the connected vehicles space.

Industry 4.0

There have been three prior industrial revolutions since the 1700s; the current, in-progress fourth revolution, Industry 4.0, promotes digital transformation.

Manufacturing is the fastest growing and the largest of IBM’s four entry markets. In this segment, AI at the edge can improve quality control, production optimization, asset management, and supply chain logistics. IBM believes there are opportunities to achieve a 4x speedup in implementing edge-based AI solutions for manufacturing operations.

For its Industry 4.0 use case development, IBM, through product, development, research and consulting teams, is working with a major automotive OEM. The partnership has established the following joint objectives:

  • Increase automation and scalability across dozens of plants using 100s of AI / ML models. This client has already seen value in applying AI/ML models for manufacturing applications. IBM Research is helping with re-training models and implementing new ones in an edge environment to help scale even more efficiently. Edge offers faster inference and low latency, allowing AI to be deployed in a wider variety of manufacturing operations requiring instant solutions.
  • Dramatically reduce the time required to onboard new models. This will allow training and inference to be done faster and allow large models to be deployed much more quickly. The quicker an AI model can be deployed in production, the quicker the time-to-value and the return on investment (ROI).
  • Accelerate deployment of new inspections by reducing the labeling effort and iterations needed to produce a production-ready model via data summarization. Selecting small data sets for annotation otherwise means manually examining thousands of images, a time-consuming process that results in labeling redundant data. Using ML-based automation for data summarization will accelerate the process and produce better model performance.
  • Enable Day-2 AI operations to help with data lifecycle automation and governance, model creation, reduce production errors, and provide detection of out-of-distribution data to help determine if a model’s inference is accurate. IBM believes this will allow models to be created faster without data scientists.
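The data-summarization idea above — selecting a small, diverse subset of samples for labeling instead of annotating everything — can be illustrated with greedy farthest-point selection. This is one simple stand-in technique; the source does not specify IBM's actual method.

```python
# Greedy farthest-point selection: repeatedly pick the sample farthest
# from everything already selected, so annotators see diverse examples
# rather than near-duplicates. Samples here are 2-D feature vectors.
def summarize(samples, k):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    selected = [samples[0]]
    while len(selected) < k:
        # Next pick = sample with the largest distance to its
        # nearest already-selected sample.
        nxt = max(samples, key=lambda s: min(dist(s, c) for c in selected))
        selected.append(nxt)
    return selected

# Three loose clusters of image features; one representative each survives.
frames = [(0, 0), (0.1, 0), (5, 5), (5.1, 5.2), (9, 0)]
print(summarize(frames, 3))
```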

Maximo Application Suite

IBM’s Maximo Application Suite plays an important part in implementing large manufacturers' current and future IBM edge solutions. Maximo is an integrated public or private cloud platform that uses AI, IoT, and analytics to optimize performance, extend asset lifecycles and reduce operational downtime and costs. IBM is working with several large manufacturing clients currently using Maximo to develop edge use cases, and even uses it within its own manufacturing operations.

IBM has research underway to develop a more efficient method of handling lifecycle management of large models that require immense amounts of data. Day-2 AI operations tasks can sometimes be more complex than initial model training, deployment, and scaling. Retraining at the edge is difficult because resources are typically limited.

Once a model is trained and deployed, it is important to monitor it for drift caused by changes in data distributions or anything that might cause a model to deviate from original requirements. Inaccuracies can adversely affect model ROI.

Day-2 AI Operations (retraining and scaling)

Day-2 AI operations consist of continual updates to AI models and applications to keep up with changes in data distributions, changes in the environment, a drop in model performance, availability of new data, and/or new regulations.

IBM recognizes the advantages of performing Day-2 AI Operations, which includes scaling and retraining at the edge. It appears that IBM is the only company with an architecture equipped to effectively handle Day-2 AI operations. That is a significant competitive advantage for IBM.

A company using an architecture that requires data to be moved from the edge back into the cloud for Day-2 related work will be unable to support many factory AI/ML applications because of the sheer number of AI/ML models to support (100s to 1000s).

“There is a huge proliferation of data at the edge that exists in multiple spokes,” Dr. Fuller said. "However, all that data isn’t needed to retrain a model. It is possible to cluster data into groups and then use sampling techniques to retrain the model. There is much value in federated learning from our point of view.”

Federated learning is a promising training solution being researched by IBM and others. It preserves privacy by using a collaboration of edge devices to train models without sharing the data with other entities. It is a good framework to use when resources are limited.
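The core of federated learning can be sketched as federated averaging: each spoke trains locally and shares only its model weights, which the hub combines weighted by sample count, so raw data never leaves a spoke. A minimal sketch (not IBM Federated Learning's actual API):

```python
# Federated averaging: combine per-spoke weight vectors into one global
# model, weighting each spoke by how many local samples it trained on.
# Only weights travel to the hub; the training data stays at the spoke.
def federated_average(updates):
    # updates: list of (weights, n_samples) pairs, one per spoke
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    return [sum(w[i] * n for w, n in updates) / total for i in range(dim)]

spoke_updates = [
    ([1.0, 2.0], 100),  # spoke A: 100 local samples
    ([3.0, 4.0], 300),  # spoke B: 300 local samples, so it counts 3x more
]
print(federated_average(spoke_updates))  # [2.5, 3.5]
```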

Dealing with limited resources at the edge is a challenge. IBM’s edge architecture accommodates the need to ensure resource budgets for AI applications are met, especially when deploying multiple applications and multiple models across edge locations. For that reason, IBM developed a method to deploy data and AI applications to scale Day-2 AI operations utilizing hub and spokes.

The graphic above shows the current status quo methods of performing Day-2 operations using centralized applications and a centralized data plane compared to the more efficient managed hub and spoke method with distributed applications and a distributed data plane. The hub allows it all to be managed from a single pane of glass.

Data Fabric Extensions to Hub and Spokes

IBM uses hub and spoke as a model to extend its data fabric. The model should not be thought of in the context of a traditional hub and spoke. IBM’s hub provides centralized capabilities to manage clusters and to create multiple hubs that can be aggregated at a higher level. This architecture has four important data management capabilities.

  1. First, models running in unattended environments must be monitored. From an operational standpoint, detecting when a model’s effectiveness has significantly degraded and if corrective action is needed is critical.
  2. Secondly, in a hub and spoke model, data is being generated and collected in many locations, creating a need for data lifecycle management. Working with large enterprise clients, IBM is building unique capabilities to manage the data plane across the hub and spoke estate, optimized to meet data lifecycle, regulatory, and compliance requirements as well as local resource constraints. Automation determines which input data should be selected and labeled for retraining purposes and used to further improve the model. Atypical data judged worthy of human attention is also identified.
  3. The third issue relates to AI pipeline compression and adaptation. As mentioned earlier, edge resources are limited and highly heterogeneous. While a cloud-based model might have a few hundred million parameters or more, edge models can’t afford such resource extravagance because of resource limitations. To reduce the edge compute footprint, model compression can reduce the number of parameters. As an example, it could be reduced from several hundred million to a few million.
  4. Lastly, suppose a scenario exists where data is produced at multiple spokes but cannot leave those spokes for compliance reasons. In that case, IBM Federated Learning allows learning across heterogeneous data in multiple spokes. Users can discover, curate, categorize and share data assets, data sets, analytical models, and their relationships with other organization members.
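The model-compression step described in point 3 can be illustrated with magnitude pruning, one common way to cut a model's parameter count (an illustrative technique, not necessarily the one IBM uses):

```python
# Toy magnitude pruning: zero out the smallest-magnitude weights,
# keeping only the top fraction. The surviving non-zero parameters
# define a smaller effective model for the edge.
def prune(weights, keep_fraction):
    k = max(1, int(len(weights) * keep_fraction))
    # Threshold = magnitude of the k-th largest weight.
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02]
print(prune(w, 0.5))  # only the three largest-magnitude weights survive
```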

In addition to AI deployments, the hub and spoke architecture and the previously mentioned capabilities can be employed more generally to tackle challenges faced by many enterprises in consistently managing an abundance of devices within and across their enterprise locations. Management of the software delivery lifecycle or addressing security vulnerabilities across a vast estate are a case in point.

Multicloud and Edge platform

In the context of its strategy, IBM sees edge and distributed cloud as an extension of its hybrid cloud platform built around Red Hat OpenShift. One of the newer and more useful options created by the Red Hat development team is Single Node OpenShift (SNO), a compact version of OpenShift that fits on a single server. It is suited to locations that still warrant a server but only as a single-node deployment rather than a cluster.

For smaller footprints such as industrial PCs or computer vision boards (for example NVidia Jetson Xavier), Red Hat is working on a project which builds an even smaller version of OpenShift, called MicroShift, that provides full application deployment and Kubernetes management capabilities. It is packaged so that it can be used for edge device type deployments.

Overall, IBM and Red Hat have developed a full complement of options to address a large spectrum of deployments across different edge locations and footprints, ranging from containers to management of full-blown Kubernetes applications, spanning MicroShift, OpenShift, and IBM Edge Application Manager.

Much is still in the research stage. IBM's objective is to achieve greater consistency in how locations and application lifecycles are managed.

First, Red Hat plans to introduce hierarchical layers of management with Red Hat Advanced Cluster Management (RHACM) to scale the number of edge locations managed by the product by two to three orders of magnitude. Additionally, securing edge locations is a major focus. Red Hat is continuously expanding platform security features, for example by recently including Integrity Measurement Architecture in Red Hat Enterprise Linux, and by adding Integrity Shield to protect policies in RHACM.

Red Hat is partnering with IBM Research to advance technologies that will permit it to protect platform integrity and the integrity of client workloads through the entire software supply chains. In addition, IBM Research is working with Red Hat on analytic capabilities to identify and remediate vulnerabilities and other security risks in code and configurations.

Telco network intelligence and slice management with AI/ML

Communication service providers (CSPs) such as telcos are key enablers of 5G at the edge. 5G benefits for these providers include:

  • Reduced operating costs
  • Improved efficiency
  • Increased distribution and density
  • Lower latency

The end-to-end 5G network comprises the Radio Access Network (RAN), transport, and core domains. Network slicing in 5G is an architecture that enables multiple virtual and independent end-to-end logical networks with different characteristics such as low latency or high bandwidth, to be supported on the same physical network. This is implemented using cloud-native technology enablers such as software defined networking (SDN), virtualization, and multi-access edge computing. Slicing offers necessary flexibility by allowing the creation of specific applications, unique services, and defined user groups or networks.
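The idea of matching applications to slices with different characteristics can be sketched as a simple selection over slice QoS profiles. Slice names and all numbers below are illustrative only, not drawn from any real deployment:

```python
from dataclasses import dataclass

# Hypothetical slice catalog: virtual networks on the same physical
# infrastructure, each with a different QoS profile.
@dataclass
class Slice:
    name: str
    max_latency_ms: float
    bandwidth_mbps: float

SLICES = [
    Slice("urllc", max_latency_ms=5, bandwidth_mbps=50),    # low latency
    Slice("embb", max_latency_ms=50, bandwidth_mbps=1000),  # high bandwidth
]

def pick_slice(latency_ms: float, bandwidth_mbps: float):
    # Return the first slice that satisfies the application's needs.
    for s in SLICES:
        if s.max_latency_ms <= latency_ms and s.bandwidth_mbps >= bandwidth_mbps:
            return s.name
    return None

print(pick_slice(latency_ms=10, bandwidth_mbps=20))    # a control app -> "urllc"
print(pick_slice(latency_ms=100, bandwidth_mbps=500))  # a video app -> "embb"
```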

An important aspect of enabling AI at the edge requires IBM to provide CSPs with the capability to deploy and manage applications across various enterprise locations, possibly spanning multiple end-to-end network slices, using a single pane of glass.

5G network slicing and slice management

Network slices are an essential part of IBM's edge infrastructure that must be automated, orchestrated and optimized according to 5G standards. IBM’s strategy is to leverage AI/ML to efficiently manage, scale, and optimize the slice quality of service, measured in terms of bandwidth, latency, or other metrics.

5G and AI/ML at the edge also represent a significant opportunity for CSPs to move beyond traditional cellular services and capture new sources of revenue with new services.

Communications service providers need management and control of 5G network slicing enabled with AI-powered automation.

Dr. Fuller sees a variety of opportunities in this area. "When it comes to applying AI and ML on the network, you can detect things like intrusion detection and malicious actors," he said. "You can also determine the best way to route traffic to an end user. Automating 5G functions that run on the network using IBM network automation software also serves as an entry point.”

In IBM’s current telecom trial, IBM Research is spearheading the development of a range of capabilities targeted for the IBM Cloud Pak for Network Automation product using AI and automation to orchestrate, operate and optimize multivendor network functions and services that include:

  • End-to-end 5G network slice management with planning & design, automation & orchestration, and operations & assurance
  • Network Data Analytics Function (NWDAF) that collects data for slice monitoring from 5G Core network functions, performs network analytics, and provides insights to authorized data consumers.
  • Improved operational efficiency and reduced cost

Future leverage of these capabilities by existing IBM Clients that use the Cloud Pak for Network Automation (e.g., DISH) can offer further differentiation for CSPs.

5G radio access

Open radio access networks (O-RANs) are expected to significantly impact telco 5G wireless edge applications by allowing a greater variety of units to access the system. The O-RAN concept separates the DU (Distributed Unit) and CU (Centralized Unit) from the Baseband Unit used in 4G and connects them with open interfaces.

The O-RAN system is more flexible. It uses AI to establish connections made via open interfaces that optimize the category of a device by analyzing information about its prior use. Like other edge models, the O-RAN architecture provides an opportunity for continuous monitoring, verification, analysis, and optimization of AI models.

The IBM-telco collaboration is expected to advance O-RAN interfaces and workflows. Areas currently under development are:

  • Multi-modal (RF level + network-level) analytics (AI/ML) for wireless communication with high-speed ingest of 5G data
  • Capability to learn patterns of metric and log data across CUs and DUs in RF analytics
  • Utilization of the antenna control plane to optimize throughput
  • Primitives for forecasting, anomaly detection and root cause analysis using ML
  • Opportunity of value-added functions for O-RAN

IBM Cloud and Infrastructure

The cornerstone for the delivery of IBM's edge solutions as a service is IBM Cloud Satellite. It presents a consistent cloud-ready, cloud-native operational view with OpenShift and IBM Cloud PaaS services at the edge. In addition, IBM integrated hardware and software edge systems will provide RHACM-based management of the platform when clients or third parties have existing managed-as-a-service models. It is essential to note that in either case this is done within a single control plane for hubs and spokes that helps optimize execution and management from any cloud to the edge in the hub and spoke model.

IBM's focus on “edge in” means it can provide the infrastructure through things like the example shown above: software-defined storage for a federated-namespace data lake that spans other hyperscaler clouds. Additionally, IBM is exploring integrated full-stack edge storage appliances based on hyperconverged infrastructure (HCI), such as Spectrum Fusion HCI, for enterprise edge deployments.

As mentioned earlier, data gravity is one of the main driving factors of edge deployments. IBM has designed its infrastructure to meet those data gravity requirements, not just for the existing hub and spoke topology but also for a future spoke-to-spoke topology where peer-to-peer data sharing becomes imperative (as illustrated with the wealth of examples provided in this article).

Wrap up

Edge is a distributed computing model. One of its main advantages is that computing, data storage, and processing are close to where data is created. Without the need to move data to the cloud for processing, real-time application of analytics and AI capabilities provides immediate solutions and drives business value.

IBM’s goal is not to move the entirety of its cloud infrastructure to the edge. That has little value and would simply function as a hub to spoke model operating on actions and configurations dictated by the hub.

IBM’s architecture will provide the edge with autonomy to determine where data should reside and from where the control plane should be exercised.

Equally important, IBM foresees this architecture evolving into a decentralized model capable of edge-to-edge interactions. IBM has no firm designs for this as yet. However, the plan is to make the edge infrastructure and platform a first-class citizen instead of relying on the cloud to drive what happens at the edge.

Developing a complete and comprehensive AI/ML edge architecture - and in fact, an entire ecosystem - is a massive undertaking. IBM faces many known and unknown challenges that must be solved before it can achieve success.

However, IBM is one of the few companies with the necessary partners and the technical and financial resources to undertake and successfully implement a project of this magnitude and complexity.

It is reassuring that IBM has a plan and that its plan is sound.

Paul Smith-Goodson is Vice President and Principal Analyst for quantum computing, artificial intelligence and space at Moor Insights and Strategy. You can follow him on Twitter for more current information on quantum, AI, and space.

Note: Moor Insights & Strategy writers and editors may have contributed to this article.

Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex,, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Infinidat, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, MulteFire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung Electronics, Samsung 
Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.

Mon, 08 Aug 2022 03:51:00 -0500, Paul Smith-Goodson
IBM Research Albany Nanotech Center Is A Model To Emulate For CHIPS Act

With the passage of the CHIPS+ Act by Congress and its imminent signing by the President of the United States, a lot of attention has been paid to the construction of new semiconductor manufacturing megasites by Intel, TSMC, and Samsung. But beyond the manufacturing side of the semiconductor business, there is a significant need to invest in related areas such as research, talent training, small and medium business development, and academic cooperation. I recently had the opportunity to tour a prime example of such a facility, one that integrates all these other aspects of chip manufacturing into a tight industry, government, and academic partnership. That partnership has been going on for over 20 years in Albany, New York, where IBM Research has a nanotechnology center located adjacent to the State University of New York (SUNY) Albany campus. With significant investment by New York State through the NY CREATES development agency, IBM, in close partnership with several universities and industry partners, is developing state-of-the-art semiconductor process technologies in working labs for the next generation of computer chips.

The center provides a unique facility for semiconductor research: its open environment facilitates collaboration between leading equipment and materials suppliers, researchers, engineers, academics, and EDA vendors. Presently, IBM has a manufacturing and research partnership with Samsung Electronics, and a research partnership with Intel was announced last year. Key chipmaking suppliers such as ASML, KLA, and Tokyo Electron (TEL) have equipment installed and are working actively with IBM to develop advanced processes and metrology for leading-edge technologies.

These facilities do not come cheap. It takes billions of dollars of investment and many years of research to achieve each new breakthrough. For example, the High-k metal gate took 15 years to go into products; the FinFET transistor, essential today, took 13 years; and the next generation transistor, the gate-all-around/nano sheet, which Samsung is putting into production now, was in development for 14 years. In addition, the cost to manufacture chips at each new process node is increasing 20-30% and the R&D costs are doubling for each node’s development. To continue supporting this strategic development, there needs to be a partnership between industry, academia, and government.
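Those two cost trends compound very differently. As a back-of-the-envelope sketch (the 25% figure is simply the midpoint of the 20-30% range quoted above, and the starting costs are placeholders, not real industry figures):

```python
# Rough illustration of how the quoted per-node cost trends compound.
# Assumes 25% manufacturing-cost growth per node (midpoint of the quoted
# 20-30%) and 2x R&D per node; starting values are arbitrary placeholders.

def project_costs(nodes: int, mfg_cost: float = 1.0, rd_cost: float = 1.0):
    """Return (manufacturing, R&D) cost multipliers after `nodes` transitions."""
    for _ in range(nodes):
        mfg_cost *= 1.25   # 20-30% wafer-cost increase per node (midpoint)
        rd_cost *= 2.0     # R&D roughly doubles per node
    return mfg_cost, rd_cost

mfg, rd = project_costs(4)
print(f"After 4 node transitions: manufacturing x{mfg:.2f}, R&D x{rd:.0f}")
# After 4 node transitions: manufacturing x2.44, R&D x16
```

After only a handful of node transitions, the doubling R&D cost, not the wafer cost, is what makes shared research facilities like Albany economically necessary.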

IBM Makes The Investment

You might ask why IBM, which sold off its semiconductor manufacturing facilities over the years, is so involved in this deep and expensive research. Well, for one, IBM is very, very good at semiconductor process development. The company pioneered several critical semiconductor technologies over the decades. But being good at a technology does not pay the bills, so IBM’s second motivation is that the company needs the best technology for its own Power and Z computers. To that end, IBM is primarily focused on developments that support high-performance computing and AI processing.

Additional strategic suppliers and partners help to scale these innovations beyond just IBM's contribution. The best equipment from the world-class equipment suppliers provides a testbed for partners to experiment and advance the state-of-the-art technology. IBM, along with its equipment partners, has built specialized equipment where needed to experiment beyond the capabilities of standard equipment.

But IBM only succeeds if it can transfer the technology from the labs into production. To do so, IBM and Samsung have been working closely on process developments and the technology transfer.


The NanoTech Center dovetails with the CHIPS Act in that it will allow the United States to develop leadership in manufacturing technologies. It can also allow smaller companies to test innovative technologies in the facility. The present fab building runs 24/7/365 and is highly utilized, but there is space to build another building that could significantly expand the clean room space. There is also a plan for a building that will be able to support the next generation of ASML EUV equipment, called high-NA EUV.

The Future is Vertical

The Albany site also is a center for chiplet technology research. As semiconductor scaling slows, unique packaging solutions for multi-die chips will become the norm for high-performance and power-efficient computing. IBM Research has an active program of developing unique 2.5D and 3D die-stacking technologies. Today the preferred substrate for building these multi-die chips is still made from silicon, based on the availability of tools and manufacturing knowledge. There are still unique process steps that must be developed to handle the specialized processing, including laser debonding techniques.

IBM also works with test equipment manufacturers because building 3D structures with chiplets presents some unique testing challenges. Third party EDA vendors also need to be part of the development process, because the ultimate goal of chiplet-based design is to be able to combine chips from different process nodes and different foundries.

Today chiplet technology is embryonic, but the future will absolutely need this technology to build the next generation of data center hardware. This is a situation where the economics and technology are coming together at the right time.


The Albany NanoTech Center is a model for the semiconductor industry and demonstrates one way to bring researchers from various disciplines and various organizations together to advance state-of-the-art semiconductor technology. But this model also needs to scale up and be replicated throughout North America. With more funding and more scale, there also needs to be an appropriately skilled workforce. Here is where the US needs to make investments in STEM education on par with those of the late-1950s Space Race. Sites like Albany, with their R&D on leading-edge process development, should inspire more students to go into physics, chemistry, and electrical engineering rather than into building the next cryptocurrency startup.

Tirias Research tracks and consults for companies throughout the electronics ecosystem from semiconductors to systems and sensors to the cloud. Members of the Tirias Research team have consulted for IBM, Intel, GlobalFoundries, Samsung, and other foundries.

Mon, 08 Aug 2022 11:08:00 -0500 | Kevin Krewell
IBM Stock Down 6.6% after Earnings; Headwinds Likely to Persist

It's got some very exciting prospects in the pipeline for the future, and having a piece of the same might be a good plan overall. IBM trading ... based on measured performance and the accuracy ...

Tue, 19 Jul 2022 09:05:00 -0500

Scientists say they've debunked Google's quantum supremacy claims once and for all

New paper shows traditional hardware can match the performance of Google's Sycamore quantum computer. A team of scientists in China claim to have replicated the performance of Google's Sycamore quantum computer ...

Fri, 05 Aug 2022 07:35:39 -0500

Everything Falcons fans need to know ahead of open practices Friday

FLOWERY BRANCH, Ga. (CBS46) - With the Atlanta Falcons set to open training camp practices free and open to fans on Friday, here is important information you will need to know if you plan to attend.

The first practice open to fans in 2022 is scheduled at the IBM Performance Field in Flowery Branch beginning at 9:30 a.m., team officials said.

Make sure to pay attention to the forecast as Friday is expected to be another hot and humid day in metro Atlanta with clouds building through the afternoon.

In case of stormy weather in your area, bring an umbrella and a blanket, and remember to follow all of the NFL's COVID-19 health and safety policies. There could potentially be opportunities for autographs or photos, so bring your markers, posters and camera phones.

Make sure to get there early to get a good parking spot.

You can also download our CBS46 mobile news app and check for traffic alerts as you plan your drive to the team’s Flowery Branch training facility.

The team held day 1 of training camp practices with veterans and rookies on Wednesday, before amping up the intensity and urgency on Thursday.

Falcons team officials say head coach Arthur Smith and general manager Terry Fontenot will speak at an upcoming practice, while Falcons legends, the mascot Freddie Falcon and Falcons cheerleaders will be in attendance. Food trucks and an official team merchandise tent will also be on-site for fans.

For more information on all of the open training camp practices, click here.

In case you’re unable to attend Friday but plan on attending at a future date, here is the 2022 Atlanta Falcons Training Camp Open Dates schedule:

  • Saturday, July 30 | IBM Performance Field | 9:30 a.m.
  • Monday, August 1 | IBM Performance Field | 10 a.m.
  • Tuesday, August 2 | IBM Performance Field | 9:30 a.m.
  • Wednesday, August 3 | IBM Performance Field | 9:30 a.m.
  • Friday, August 5 | IBM Performance Field | 9:30 a.m.
  • Saturday, August 6 | IBM Performance Field | 9:30 a.m.
  • Monday, August 8 | IBM Performance Field | 10 a.m.
  • Tuesday, August 9 | IBM Performance Field | 9:30 a.m.
  • Wednesday, August 10 | IBM Performance Field | 9:30 a.m.
  • Monday, August 15 | Mercedes-Benz Stadium | 6:30 p.m.
  • Wednesday, August 24 | IBM Performance Field | Joint practices with Jacksonville | 1 p.m.
  • Thursday, August 25 | IBM Performance Field | Joint practices with Jacksonville | 1 p.m.
Thu, 28 Jul 2022 12:37:00 -0500
IBM claims to have mapped out a route to quantum advantage

A lot of work is going into improving performance by increasing the ... quantum hardware currently available. “At IBM Quantum, we plan to continue developing our hardware and software with ...

Thu, 21 Jul 2022 04:20:00 -0500

Businesses confess: We pass cyberattack costs onto customers

Almost 50 percent of the costs of a breach are incurred more than a year after the incident, IBM found. Such numbers show not only that a given organization will likely sustain a data breach, but that ...

Thu, 28 Jul 2022 18:35:17 -0500

International Business Machines Corporation (IBM) Is a Trending Stock: Facts to Know Before Betting on It

IBM (IBM) has recently been on the list of the most searched stocks. Therefore, you might want to consider some of the key factors that could influence the stock's performance in the near future.

Shares of this technology and consulting company have returned -6.4% over the past month versus the Zacks S&P 500 composite's +7.8% change. The Zacks Computer - Integrated Systems industry, to which IBM belongs, has lost 2.5% over this period. Now the key question is: Where could the stock be headed in the near term?

Although media reports or rumors about a significant change in a company's business prospects usually cause its stock to trend and lead to an immediate price change, there are always certain fundamental factors that ultimately drive the buy-and-hold decision.

Earnings Estimate Revisions

Here at Zacks, we prioritize appraising the change in the projection of a company's future earnings over anything else. That's because we believe the present value of its future stream of earnings is what determines the fair value for its stock.

Our analysis is essentially based on how sell-side analysts covering the stock are revising their earnings estimates to take the latest business trends into account. When earnings estimates for a company go up, the fair value for its stock goes up as well. And when a stock's fair value is higher than its current market price, investors tend to buy the stock, resulting in its price moving upward. Because of this, empirical studies indicate a strong correlation between trends in earnings estimate revisions and short-term stock price movements.

IBM is expected to post earnings of $1.88 per share for the current quarter, representing a year-over-year change of -25.4%. Over the last 30 days, the Zacks Consensus Estimate has changed -27.5%.

The consensus earnings estimate of $9.47 for the current fiscal year indicates a year-over-year change of +19.4%. This estimate has changed -4.3% over the last 30 days.

For the next fiscal year, the consensus earnings estimate of $10.05 indicates a change of +6.2% from what IBM is expected to report a year ago. Over the past month, the estimate has changed -6.3%.
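Those year-over-year percentages follow mechanically from the estimates themselves. A quick sanity check, using only figures quoted in this article:

```python
# Sanity-check the year-over-year changes implied by the quoted estimates.

def yoy_change(current: float, prior: float) -> float:
    """Percentage change from the prior figure to the current one."""
    return (current / prior - 1.0) * 100.0

# Next fiscal year: $10.05 vs. the current-year consensus of $9.47.
print(f"{yoy_change(10.05, 9.47):+.1f}%")

# The current-quarter change of -25.4% on a $1.88 estimate implies a
# year-ago figure of roughly 1.88 / (1 - 0.254), i.e. about $2.52.
print(f"${1.88 / (1 - 0.254):.2f}")
```

The small gap between the computed +6.1% and the quoted +6.2% is just rounding in the published consensus figures.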

With an impressive externally audited track record, our proprietary stock rating tool -- the Zacks Rank -- is a more conclusive indicator of a stock's near-term price performance, as it effectively harnesses the power of earnings estimate revisions. The size of the recent change in the consensus estimate, along with three other factors related to earnings estimates, has resulted in a Zacks Rank #4 (Sell) for IBM.

The chart below shows the evolution of the company's forward 12-month consensus EPS estimate:

[Chart: evolution of IBM's forward 12-month consensus EPS estimate]

Revenue Growth Forecast

While earnings growth is arguably the most superior indicator of a company's financial health, nothing happens as such if a business isn't able to grow its revenues. After all, it's nearly impossible for a company to increase its earnings for an extended period without increasing its revenues. So, it's important to know a company's potential revenue growth.

For IBM, the consensus sales estimate for the current quarter of $13.91 billion indicates a year-over-year change of -21%. For the current and next fiscal years, $59.9 billion and $61.2 billion estimates indicate -15.4% and +2.2% changes, respectively.

Last Reported Results and Surprise History

IBM reported revenues of $15.54 billion in the last reported quarter, representing a year-over-year change of -17.1%. EPS of $2.31 for the same period compares with $2.33 a year ago.

Compared to the Zacks Consensus Estimate of $15.12 billion, the reported revenues represent a surprise of +2.75%. The EPS surprise was +0.87%.

Over the last four quarters, IBM surpassed consensus EPS estimates three times. The company topped consensus revenue estimates two times over this period.


Valuation

Without considering a stock's valuation, no investment decision can be efficient. In predicting a stock's future price performance, it's crucial to determine whether its current price correctly reflects the intrinsic value of the underlying business and the company's growth prospects.

While comparing the current values of a company's valuation multiples, such as price-to-earnings (P/E), price-to-sales (P/S) and price-to-cash flow (P/CF), with its own historical values helps determine whether its stock is fairly valued, overvalued, or undervalued, comparing the company relative to its peers on these parameters gives a good sense of the reasonability of the stock's price.

The Zacks Value Style Score (part of the Zacks Style Scores system), which pays close attention to both traditional and unconventional valuation metrics to grade stocks from A to F (an A is better than a B; a B is better than a C; and so on), is pretty helpful in identifying whether a stock is overvalued, rightly valued, or temporarily undervalued.

IBM is graded B on this front, indicating that it is trading at a discount to its peers.

Bottom Line

The facts discussed here and much other available information might help determine whether or not it's worthwhile paying attention to the market buzz about IBM. However, its Zacks Rank #4 does suggest that it may underperform the broader market in the near term.


Tue, 02 Aug 2022 03:56:00 -0500
The Secret Weapon for Sustainable Business? AI.

Illustrations by Timo Lenzen

In a city brimming with skyscrapers, One Vanderbilt still manages to stand out. At 1,401 feet, the two-year-old office tower is one of the tallest buildings in Manhattan. It’s also one of the greenest. The building was constructed with 90 percent recycled steel rebar. It has a state-of-the-art cogeneration system that keeps its energy use low and a 90,000-gallon rainwater collection system that recycles water for irrigation and cooling. As a result, it boasts one of the highest levels of LEED certification. “We see environmental sustainability as a social obligation. It’s not just a trend,” says Laura Vulaj, senior vice president of hospitality and sustainability at SL Green, the real estate company that owns the tower.

While One Vanderbilt is well ahead of the pack on sustainability, Vulaj knows that she and her team are nonetheless going to have to pick up the pace. In New York City, a rigorous new climate law is requiring landlords to dramatically reduce their environmental impact. That comes on top of state and federal regulations, United Nations targets, as well as requests from board members, investors, and potential tenants for data on environmental, social, and governance (ESG) performance.

Fulfilling those demands is particularly complex for a landlord like SL Green, which has hundreds of tenants and a vast portfolio of properties. According to Vulaj, requests for environmental compliance data have increased tenfold in recent years, and each new framework requires different reporting methodologies. So today, getting a full picture of SL Green’s environmental footprint, and figuring out how to shrink it, is a tall order. It requires aggregating and analyzing a mountain of data from a variety of sources across multiple buildings. “There’s energy data. There’s water data. There’s waste data,” Vulaj says. “Data is pouring in for every building, and it’s living in so many different areas. You get data fatigue.”

Vulaj’s data challenges are shared by leaders at many companies focused on sustainability. This year, according to a poll conducted by the IBM Institute for Business Value, more than half of CEOs ranked sustainability among their top concerns. Yet 44 percent of CEOs said they lack the ability to translate sustainability data into insights that help them meet environmental targets. “It’s coming to a head,” says Kareem Yusuf, Ph.D., general manager of IBM Sustainability Software. “Society cares a lot more, investors are using this to inform where they place their dollars, and regulation is only going to become more apparent.”

Faced with such challenges, companies like SL Green often come to IBM for help. “Often the conversation starts with, ‘I need to get a handle on this,’” says Yusuf. “‘How can I make sense of all this data? I can’t do it with spreadsheets anymore.’” Yusuf’s solution for these organizations is simple: AI. “Machine learning can look at data, bring it together, and make sense of it—and then, most importantly, place it in front of you in a way that allows an informed, intelligent decision to be made,” Yusuf says. “It’s operationalizing sustainability.”

More and more companies are following this advice. According to an IBM study, two-thirds of IT leaders say their company is currently planning or already in the process of using AI to manage the complexity of data for sustainability. According to Witold Henisz, vice dean and faculty director of the Environment, Social and Governance Initiative at the Wharton School, that’s a huge shift. In years past, many companies kept such minimal sustainability data that they could effectively relegate it to a single spreadsheet column. Now, he says, the scale of the sustainability data companies are collecting requires more sophisticated technology. “This is a big data problem,” he says.

Bjarne Jørgensen, executive director of asset management and operations at Danish civil infrastructure operator Sund & Bælt, came to understand that problem well in 2020, when he began looking into how to preserve the Great Belt (Storebælt) Link, an 11-mile system of bridges and tunnels connecting the Danish islands Zealand and Funen. When it was built in the 1990s, Jørgensen says, the system looked indestructible. But it turned out to be no match for the ravages of the North Sea and climate change, which have deteriorated the system with fierce winds and tidal surges. “Our focus was on prolonging the lifetime of the bridge, which also reduces its carbon footprint, because if you have to rebuild something, it releases more carbon,” Jørgensen says.

To preserve the system, Sund & Bælt needed to know its health in real time. This meant processing between 12,000 and 14,000 data points collected from moisture-detecting sensors and a fleet of drones inspecting 300,000 square meters of concrete. And the data itself could only go so far. “Data doesn’t necessarily improve your decisions,” says Jørgensen. “You have to see into it and find the essence in order to use it.” It’s a common complaint. “The promise of big data analytics is that we’re going to gain insight, which is going to help performance and address the climate transition,” says Henisz. “But it’s not a crystal ball. A lot of analysis has to be done.”

That analysis, in the case of the Great Belt Link, relied heavily on AI. Using IBM Maximo Civil Infrastructure and Maximo Application Suite for intelligent asset management helped Sund & Bælt generate penetrating, real-time analysis on the condition of the bridges, tunnels, and other critical infrastructure components. Harnessing the power of AI to analyze visual inspection data on rust, corrosion, displacement, and stress, alongside maintenance records, design documents and 3-D models, provides Jørgensen’s team with crucial insights not only on the current health of the bridge but also on the potential impact of changing environmental conditions.
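IBM has not published Maximo's internals here, but the pattern the article describes - comparing each sensor's latest reading against its own learned baseline and flagging outliers for inspection - can be sketched simply. The sensor names, reading format, and 3-sigma threshold below are all illustrative assumptions, not details of the Sund & Bælt deployment:

```python
# Illustrative sketch of baseline-and-flag monitoring for bridge sensors.
# The 3-sigma threshold and the reading format are assumptions, not
# details of IBM Maximo or the Sund & Baelt deployment.
from statistics import mean, stdev

def find_anomalies(readings, latest, sigmas=3.0):
    """Return sensor IDs whose latest value deviates from that sensor's history."""
    flagged = []
    for sensor_id, history in readings.items():
        mu, sd = mean(history), stdev(history)
        if abs(latest[sensor_id] - mu) > sigmas * sd:
            flagged.append(sensor_id)  # schedule this component for inspection
    return flagged

history = {"moisture-021": [12.1, 11.8, 12.4, 12.0, 11.9],
           "moisture-044": [30.2, 30.5, 29.8, 30.1, 30.4]}
latest = {"moisture-021": 12.2, "moisture-044": 41.7}  # 044 has spiked
print(find_anomalies(history, latest))  # ['moisture-044']
```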

The results have been game-changing for the team managing the Great Belt Link. IBM’s AI has accelerated and streamlined workflow processes, including the timing of inspections. It has also quickened the decision-making power of engineers in the field and allowed them to plan further ahead.

To its surprise, the Sund & Bælt team found that the Great Belt Link system, which had been expected to last 100 years, could significantly lengthen its lifespan using AI. “We now know that if we keep getting better information about the health of the concrete and steel, then we can reach 200 years,” says Jørgensen. Those extra hundred years will save the company the cost of new construction and reduce its carbon footprint by 750,000 tons, twice the mass of the Empire State Building—an achievement for both the business and the environment. “Thankfully, they go hand in hand,” Jørgensen says.

For SL Green, the promise of that win-win motivates the company’s ongoing pursuit of even more sustainable buildings. In the coming year, Vulaj says, the company will develop targets to achieve carbon neutrality at buildings like One Vanderbilt—both because the science demands it and because SL Green customers expect it. “Regardless of what the legal mandates are, we feel we have an obligation to reduce emissions and improve the energy efficiency of our buildings,” she says. To ensure its success, SL Green is ditching its sea of spreadsheets and shifting to Envizi, an IBM software suite that will allow the company to manage all its ESG indicators—including energy use, carbon emissions, and environmental and social responsibility metrics—in one place, making it easier to analyze, operationalize, and report.

Once those systems are integrated, Vulaj hopes, One Vanderbilt will stand out even more in the New York City skyline for its green bona fides. But to create a truly sustainable world, buildings like One Vanderbilt will have to become more commonplace, which means more companies will have to supercharge their sustainability initiatives. According to Henisz, companies are currently spending $35 billion per year on financial data, but only $1 billion on ESG data. On one hand, he says, you could look at those figures and focus on the fact that ESG data is just 3 percent of the total spend on financial data. Or you could recognize, as he does, that “there’s a lot of runway to do more.”

Tue, 02 Aug 2022 05:34:00 -0500
Against all odds Infinidat turns profitable

Until a few years ago, Moshe Yanai was considered a serial entrepreneur with a golden touch, a Midas of Israeli high-tech. Indeed, Yanai has had a long career as one of the world's foremost data storage experts, having been part of IBM's mainframe success and competitor EMC’s revolutionary Symmetrix product. In 2008, Yanai demonstrated his magic touch once again when he sold two start-ups he had founded - XIV and Diligent Technologies - to IBM, one after the other, for a total sum of about $500 million.

Yanai thus became one of the richest high-tech entrepreneurs of the early 21st century. In addition, he owns a helicopter pilot training school, as well as an executive helicopter that once belonged to Senator Ted Kennedy, which Yanai uses to pilot VIPs around the country.

So when Yanai chose to found Infinidat more than a decade ago, with the promise that it was not intended for sale to a technology giant, it became one of the most intriguing companies in Israel. Yanai, who throughout his career had developed ground-breaking storage solutions and served them on a silver platter to US corporations, now wanted to do things differently. He aimed to establish a revolutionary storage system, one that would significantly improve the information storage capabilities of large enterprises, and would compete directly with the tech giants, all the way to an IPO.

No one imagined that within less than a decade the company would oust Yanai from its management, lose dozens of employees, and wrangle in the labor courts over legal claims by those former employees. No one could have imagined how, a year after that wave of departures, Infinidat would turn into a profitable company, with strong investor backing, and a new management that sees the potential Yanai envisioned from the outset.

Average deal $700,000 a year

Infinidat had an ambitious vision that perhaps was also its Achilles’ heel: a smart storage system capable of hopping between different types of storage using principles of artificial intelligence, algorithms, and mathematics. The aim was to reduce costs and raise workload application speeds for the enterprise. Underlying that vision is the same technology Yanai and his partners thought up a decade ago, currently protected by more than 100 patents. Infinidat is one of Israel’s biggest startups. It has raised $370 million in total, and employs about 500 people in Israel and the US.

Today, enterprises must choose between different types of storage: slow magnetic drives, flash-based solid-state drives (SSDs), and arrays of random-access digital memory cells (dynamic random-access memory, or DRAM). The latter are fast, but their use, unfortunately, is much more expensive. Infinidat's algorithm learns the organization’s data flow - types of information and usage patterns - and knows to store it in the right place so that it can be accessed as needed, faster and more cheaply than the competition.
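Infinidat's Neural Cache is proprietary, but the placement idea described here - hot data in fast, expensive tiers, cold data on cheap disks, driven by observed access patterns - is a classic tiering policy. A deliberately minimal sketch (the thresholds and tier labels are invented for illustration; the real system predicts future access rather than merely counting past accesses):

```python
# Minimal sketch of access-frequency-based tier placement. The thresholds
# and tier names are illustrative; Infinidat's Neural Cache is proprietary
# and far more sophisticated (it predicts access patterns, not just counts).
from collections import Counter

class TieringPolicy:
    def __init__(self, hot_threshold=100, warm_threshold=10):
        self.access_counts = Counter()
        self.hot, self.warm = hot_threshold, warm_threshold

    def record_access(self, block_id):
        self.access_counts[block_id] += 1

    def place(self, block_id):
        """Pick a tier: DRAM for hot blocks, flash for warm, disk for cold."""
        n = self.access_counts[block_id]  # Counter returns 0 for unseen blocks
        if n >= self.hot:
            return "DRAM"
        if n >= self.warm:
            return "SSD"
        return "HDD"

policy = TieringPolicy()
for _ in range(150):
    policy.record_access("db-index-page")
for _ in range(20):
    policy.record_access("monthly-report")
print(policy.place("db-index-page"), policy.place("monthly-report"), policy.place("old-backup"))
# DRAM SSD HDD
```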

But Infinidat sells more than algorithms - it sells a complete system: flash drives, hard disks, its "Neural Cache" software that is the product’s smart core, and full-service company support - the "white glove" model of continuous performance monitoring and immediate troubleshooting. Today, the price of an average deal is about $700,000 per year, and can easily rocket into the millions of dollars.

"A premium product sold at high profit"

In September 2020, Shahar Bar-Or took up the post of GM of Infinidat Israel and Chief Product Officer. Since then, two complementary product lines have been developed: a flash drive system, for those wishing to further enhance their storage activity, and a new backup system that adds disaster recovery capabilities and cyber-attack resilience. The company declines to comment, but according to market estimates, its annual revenue rate now tops $300 million and it makes an operating profit.

In January of this year, the company announced to investors and employees that orders grew by 40% last year, with a 68% increase in the fourth quarter, compared with orders in the corresponding quarter in 2020.

Despite dreams of an IPO, the company is realistic. Just as things were going well for it, the market did an about-face, and New York IPOs have been shuttered for almost a year.

Infinidat’s cost structure is beyond belief: the company has about 500 employees in Israel, as well as anywhere in the world where it has a customer. It produces its systems in-house, maintains production lines at its Kfar Saba facility, not to mention a half-million-shekel monthly electric bill for the large server farm it leases from GDC of Herzliya, located down the street from its headquarters on Hamanofim Street.

"This is a premium product aimed at the largest customers in the world, and it’s sold at a high margin - it’s not an off-the-shelf product sold at a low profit. The company’s position as a privately-held, growing, and profitable company that has been working for several years with hundreds of large and important customers, allows us the flexibility to stay balanced," says Bar-Or.

The employees: Cancelled buybacks, diluted shares

Unfortunately, many of the partners to this success no longer work for Infinidat. 2020 marked a turning point for the company, but this was preceded by a long period of turmoil accompanied by resignations, lawsuits, and changes in management. According to a lawsuit filed with Tel Aviv District Court and the Bat Yam Regional Labor Court by more than 30 of the company's original employees, it appears that already in early 2020, the company began reneging on its commitment to buy shares back from its employees - a plan first implemented in 2018 that was supposed to happen in 2020 as well. The employees claim this plan kept them waiting for many years, that the framework was supposed to last five years, that the company had committed itself to purchase 2% each year of the special share capital issued to each employee, priced at approximately $1,300 per share.

According to the employees, the management refrained from providing proper information on the matter. It was then discovered that the company also stopped reporting regularly to the Registrar of Companies about other changes made to its share structure, statutes, and board of directors. Upon investigation, the employees discovered that the purchase plans actually diluted the remaining special shares in their possession. Concurrent with the repurchase plan, special employee shares were issued to the Claridge fund (the Bronfman family) and the ION fund, which were protected against dilution. This ultimately diluted the employees to half the shares they had originally been promised.

An examination conducted afterwards by a few veteran company employees made clear that their situation was even worse. The share series issued especially for them - which was supposed to provide them with protection from dilution so that, in case of a sale, IPO, or liquidation, they would be the first to receive 20% of the proceeds - had been changed continually without their knowledge since 2015, as new investors had come into the company. This paralleled the company's decision in 2020 to dilute its former employees to one-thousandth of their holdings.

Legal proceedings

According to the claimants, the original commitments to the employees were initiated by and made in the presence of super-entrepreneur Yanai, to persuade them to continue working at the company for years. In one example cited in the statement of claim, Yanai is even quoted as raising the notion that, should the company be sold for $1 billion, its employees would receive $200 million.

"In addition to the fact that harm was done to the employees, it was done covertly and only came to light later, after many years and thanks to the resourcefulness of the employees," attorney Yaron Alon of Horovitz, Even, Uzan & Co., who represents a large group of employees, told "Globes". A similar lawsuit has been filed by Gad Ticho and Alon Kanety of Caspi & Co. A significant number of the claimants were senior executives, some of whom had been with the company for years. These include the person who was Yanai’s manager at Elbit and then left with him for EMC; many of the company’s first product architects, and vice-presidents of marketing, sales, development and product throughout the life of the company. Dr. Alex Winokur, who managed development at both XIV and Axxana (a startup Winokur founded that was eventually acquired by Infinidat), is now in the process of negotiating with the company on the terms of payment due to him. All these proceedings are at different stages in the courts.

"I’m happy the company is doing well, but that success must be attributed to the veteran employees who contributed to its establishment, to the invention of its products, and to their development," says Adv. Alon. "Those employees worked solely in the light of the explicit promises they received about the shares that were to be allocated to them. We are confident that the Economic Affairs Court and the Labor Court will compel the company to meet its obligations."

Infinidat responded: "We believe that the claims are baseless, and in any case will be determined by the appropriate courts."

Upheaval, promotions, and growth engines

The upheaval happened in 2020, after years of the company hemorrhaging operating losses, estimated at tens of millions of dollars a year. The board decided to remove Yanai from the position of chairman (he remains an active company director to this day), and named Boaz Chalamish, founder and president of Clarizen, in his place, with Kariel Sandler as co-CEO and CPO, and Nir Simon as co-CEO and CFO. As part of the long recovery process, the company raised tens of millions of dollars from TPG Capital, the Bronfman family’s Claridge fund, ION, and Goldman Sachs. The process also included a plan to dilute the holdings of former employees that, although it was put into effect only a few months later, prompted employee resignations, along with furloughs due to the Covid-19 epidemic and dismissals. In all, the company shed 70% of its workforce.

"In September 2020, we identified those core employees who could be given greater responsibility and we promoted them to more senior positions," said Bar-Or. "I looked for the team leaders who, despite the turmoil, had the courage and strength - some of them even approached me and said, ‘I'm not going anywhere'. The absolute majority of directors you’ll see in the company today are team leaders who took responsibility and advanced. Similarly, many of our team leaders today were engineers who took on additional responsibilities. Although we had a high turnover of managers, and the average seniority of management is one year, the company is anchored by product, technology, sales, and support teams that have continued to serve customers throughout this time. We hired experienced managers from outside, mainly from large companies, built plans for launching two product lines, and focused on new growth engines, like flash products and backup.

"During the first half of the year, I was losing sleep from the weight of care and responsibility resting on me, but after this period we could say that the company was stabilizing and that the existential threat had lifted."

How did you transition from loss to profit?

"We cancelled unnecessary projects, and had to think harder about matching the workforce to our revenue level. Up until half a year ago, the term ‘profit’ wasn’t much used in the Israeli high-tech lexicon, but already in 2020, we committed ourselves to not spending more than we could afford. That’s considered old school. During the first half of 2021, it was hard, because our teams needed more people, and we wouldn’t allow that until we felt we were meeting our sales targets. Now, in mid-2022, as we go into a global economic crisis, we’re 'privileged', because we’re already used to operating profitably. We have great conditions here, including fully stocked kitchens, every type of coffee machine, generous meal vouchers, events and activities - but we’re not the type to host extravagant performances or staff trips to exotic islands. We’ll invest in growth and our workers."

"I was excited by the challenge"

What was the moment when you said to yourself, "We've made it"?

"Towards the end of 2020, I saw that we’d succeeded in filling most of the critical positions through internal promotions and external recruitment. I also saw that the acceptance rate for our job offers had crossed the 90% threshold, which means that most of the candidates we interviewed, each considering several different job offers, decided to go with us. In addition, we saw the number of deals increasing rapidly. Up to that point, our competitors were doing unbelievable things, including going to one of our customers and telling them we were about to go under. We had to persuade the customer that our competitor was mistaken - and that customer decided to believe us, and has stayed with us to this day. The investors were behind us all the way. Gilad Shany of ION told me: ‘I’m not in your position but I can guess what you’re going through. Even if you miss, know that I’ve got your back.’"

You came from a very stable job. What persuaded you to stick your head into "the hornet's nest" at that time?

"The more difficult the situation described to me, the more attracted I was. They told me about the technology, the people, and the revenue, which was impressive even then, but also about the lawsuits, the loss of trust and the departures, and how desperate the situation seemed for people. That excited me even more, because this was a bigger challenge than coming to a company where everything was okay. Even though, almost every day I was asked at home ‘What were you thinking?’ or ‘What have you done?’, I saw the opportunity in a company with both technological and managerial challenges. After 15 years in corporate America, the combination of a large Israeli-American company with a great opportunity to bring value to Israeli high-tech attracted me. In retrospect, I’m grateful because this is a lesson you won’t learn if you don't roll up your sleeves and get to work."

Does the current economic crisis affect you?

"Since 2020, we’ve avoided unnecessary expenses. We’ve grown in a responsible manner, and we have no need or intention of cutting back. On the contrary: we have many open positions in Israel and around the world, and we’re hiring on the basis of our performance and increased sales. It’s true that in a difficult economic environment, companies are cutting back on many product purchases, but it is less likely that a senior executive at a major enterprise will cut back on storage at a time when the volume of information it collects keeps doubling."


  • Sector: Data storage servers.
  • Executives: Global CEO Phil Bullinger, responsible for sales and service, and Infinidat Israel GM Shahar Bar-Or, responsible for R&D and product.
  • History: Founded in 2011 by Moshe Yanai, the company has so far raised $370 million from investors such as Claridge and Goldman Sachs.
  • Employees: 490 people in some 17 locations worldwide, more than half of them at the development center in Herzliya.
  • One thing more: For two years, Infinidat Israel GM Shahar Bar-Or taught computer programming at a high school.

Published by Globes, Israel business news, on July 31, 2022.

© Copyright of Globes Publisher Itonut (1983) Ltd., 2022.
