IBM Research Rolls Out A Comprehensive AI And Platform-Based Edge Research Strategy Anchored By Enterprise Use Cases And Partnerships

I recently met with Dr. Nick Fuller, Vice President, Distributed Cloud, at IBM Research for a discussion about IBM’s long-range plans and strategy for artificial intelligence and machine learning at the edge.

Dr. Fuller is responsible for providing AI and platform–based innovation for enterprise digital transformation spanning edge computing and distributed cloud management. He is an IBM Master Inventor with over 75 patents and co-author of 75 technical publications. Dr. Fuller obtained his Bachelor of Science in Physics and Math from Morehouse College and his PhD in Applied Physics from Columbia University.

Edge In, not Cloud Out

In general, Dr. Fuller told me that IBM is focused on developing an "edge in" position versus a "cloud out" position with data, AI, and Kubernetes-based platform technologies to scale hub and spoke deployments of edge applications.

A hub plays the role of a central control plane used for orchestrating the deployment and management of edge applications in a number of connected spoke locations such as a factory floor or a retail branch, where data is generated or locally aggregated for processing.
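To make the hub-and-spoke split concrete, here is a minimal, hypothetical sketch of a hub-side control plane that registers spokes and rolls an application out to all of them. The class and method names (EdgeHub, Spoke, deploy_app) are illustrative assumptions, not IBM's actual API.

```python
# Hypothetical sketch of a hub-side control plane; the names here only
# illustrate the hub-and-spoke pattern described above, not an IBM product.
from dataclasses import dataclass, field

@dataclass
class Spoke:
    name: str                                   # e.g., a factory floor or retail branch
    apps: dict = field(default_factory=dict)    # app name -> deployed version

class EdgeHub:
    """Central control plane: one place to deploy and audit every spoke."""

    def __init__(self) -> None:
        self.spokes: dict[str, Spoke] = {}

    def register(self, spoke: Spoke) -> None:
        self.spokes[spoke.name] = spoke

    def deploy_app(self, app: str, version: str) -> None:
        # Roll the same application out to every connected spoke location.
        for spoke in self.spokes.values():
            spoke.apps[app] = version

hub = EdgeHub()
hub.register(Spoke("factory-floor-7"))
hub.register(Spoke("retail-branch-42"))
hub.deploy_app("defect-detector", "1.3.0")
print({s.name: s.apps for s in hub.spokes.values()})
```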

“Cloud out” refers to the paradigm where cloud service providers are extending their cloud architecture out to edge locations. In contrast, “edge in” refers to a provider-agnostic architecture that is cloud-independent and treats the data-plane as a first-class citizen.

IBM's overall architectural principles are scalability, repeatability, and full-stack solution management, with everything managed through a single unified control plane.

IBM’s Red Hat platform and infrastructure strategy anchors the application stack with a unified, scalable, and managed OpenShift-based control plane equipped with a high-performance storage appliance and self-healing system capabilities (inclusive of semi-autonomous operations).

IBM’s strategy also includes several in-progress platform-level technologies for scalable data, AI/ML runtimes, accelerator libraries for Day-2 AI operations, and scalability for the enterprise.

It is important to mention that IBM is designing its edge platforms with labor cost and technical workforce in mind. Data scientists with PhDs are in high demand, making them difficult to find and expensive to hire once you find them. IBM is designing its edge system capabilities and processes so that domain experts, rather than PhDs, can deploy new AI models and manage Day-2 operations.

Why edge is important

Advances in computing and storage have made it possible for AI to process mountains of accumulated data to provide solutions. By bringing AI closer to the source of data, edge computing is faster and more efficient than the cloud. While cloud data accounts for 60% of the world’s data today, vast amounts of new data are being created at the edge by industrial applications, traffic cameras, order management systems, and more, all of which can be processed at the edge in a fast and timely manner.

Public cloud and edge computing differ in capacity, technology, and management. An advantage of edge is that data is processed and analyzed at or near its collection point. In the case of cloud, data must be transferred from a local device into the cloud for analytics and then transferred back to the edge again. Moving data through the network consumes capacity and adds latency to the process. It’s easy to see why executing a transaction at the edge reduces latency and eliminates unnecessary load on the network.

Increased privacy is another benefit of processing data at the edge. Analyzing data where it originates limits the risk of a security breach. Most of the communications between the edge and the cloud is then confined to such things as reporting, data summaries, and AI models, without ever exposing the raw data.

IBM at the Edge

In our discussion, Dr. Fuller provided a few examples to illustrate how IBM plans to provide new and seamless edge solutions for existing enterprise problems.

Example #1 – McDonald’s drive-thru

Dr. Fuller’s first example centered on the quick service restaurant (QSR) problem of drive-thru order taking. Last year, IBM acquired an automated order-taking system from McDonald's. As part of the acquisition, IBM and McDonald's established a partnership to perfect voice ordering methods using AI. Drive-thru orders are a significant percentage of total QSR orders for McDonald's and other QSR chains.

McDonald's and other QSR restaurants would like every order to be processed as quickly and accurately as possible. For that reason, McDonald's conducted trials at ten Chicago restaurants using an edge-based AI ordering system with NLP (Natural Language Processing) to convert spoken orders into a digital format. It was found that AI had the potential to reduce ordering errors and processing time significantly. Since McDonald's sells almost 7 million hamburgers daily, shaving a minute or two off each order represents a significant opportunity to address labor shortages and increase customer satisfaction.
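As a toy illustration of the NLP step, turning a spoken-order transcript into a structured digital order, consider the sketch below. A production system would use trained speech and language models; the mini-menu and the parse_order helper are invented for illustration only.

```python
import re

# Hypothetical mini-menu; a real QSR system would use far richer catalog data.
MENU = {"hamburger": 3.99, "fries": 1.99, "cola": 1.49}
WORDS_TO_NUM = {"a": 1, "one": 1, "two": 2, "three": 3}

def parse_order(transcript: str) -> dict:
    """Turn a spoken-order transcript into a structured order (toy sketch)."""
    tokens = re.findall(r"[a-z]+", transcript.lower())
    order: dict = {}
    for qty_word, item_word in zip(tokens, tokens[1:]):
        # Accept either an exact menu name or a simple trailing-"s" plural.
        item = item_word if item_word in MENU else item_word[:-1]
        if qty_word in WORDS_TO_NUM and item in MENU:
            order[item] = order.get(item, 0) + WORDS_TO_NUM[qty_word]
    return order

print(parse_order("I'd like two hamburgers, one fries, and a cola"))
# {'hamburger': 2, 'fries': 1, 'cola': 1}
```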

Example #2 – Boston Dynamics and Spot the agile mobile robot

According to an earlier IBM survey, many manufacturers have already implemented AI-driven robotics with autonomous decision-making capability. The study also indicated that over 80 percent of companies believe AI can help improve future business operations. However, some companies expressed concern about the limited mobility of edge devices and sensors.

To develop a mobile edge solution, IBM teamed up with Boston Dynamics. The partnership created an agile mobile robot using IBM Research and IBM Sustainability Software AI technology. The device can analyze visual sensor readings in hazardous and challenging industrial environments such as manufacturing plants, warehouses, electrical grids, and waste treatment plants. The value proposition that Boston Dynamics brought to the partnership was Spot the agile mobile robot, a walking, sensing, and actuation platform. Like all edge applications, the robot’s wireless mobility uses self-contained AI/ML that doesn’t require access to cloud data. It uses cameras to read analog devices, visually monitor fire extinguishers, and conduct a visual inspection of human workers to determine if required safety equipment is being worn.

IBM was able to show up to a 10X speedup by automating some manual tasks, such as converting the detection of a problem into an immediate work order in IBM Maximo to correct it. A fast automated response was not only more efficient, but it also improved the safety posture and risk management for these facilities. Similarly, some factories need to thermally monitor equipment to identify any unexpected hot spots that may show up over time, indicative of a potential failure.

IBM is working with National Grid, an energy company, to develop a mobile solution using Spot, the agile mobile robot, for image analysis of transformers and thermal connectors. Spot monitored connectors on both flat and 3D surfaces. IBM was able to show that Spot could detect excessive heat build-up in small connectors, potentially avoiding unsafe conditions or costly outages. This AI/ML edge application can produce faster response times when an issue is detected, which is why IBM believes significant gains are possible by automating the entire process.

IBM market opportunities

Drive-thru orders and mobile robots are just a few examples of the millions of potential AI applications that exist at the edge and are driven by several billion connected devices.

Edge computing is an essential part of enterprise digital transformation. Enterprises seek ways to demonstrate the feasibility of solving business problems using AI/ML and analytics at the edge. However, once a proof of concept has been successfully demonstrated, it is a common problem for a company to struggle with scalability, data governance, and full-stack solution management.

Challenges with scaling

“Determining entry points for AI at the edge is not the difficult part,” Dr. Fuller said. “Scale is the real issue.”

Scaling edge models is complicated because there are so many edge locations with large amounts of diverse content and a high device density. Because large amounts of data are required for training, data gravity is a potential problem. Further, in many scenarios, vast amounts of data are generated quickly, leading to potential data storage and orchestration challenges. AI models are also rarely "finished." Monitoring and retraining of models are necessary to keep up with changes in the environment.
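Drift monitoring of the sort described here can start with something as simple as comparing live feature distributions against the training distribution. A minimal sketch using a two-sample Kolmogorov-Smirnov test follows; the significance threshold and the retraining trigger are illustrative assumptions, not a documented IBM procedure.

```python
import numpy as np
from scipy.stats import ks_2samp

def needs_retraining(train_feature: np.ndarray,
                     live_feature: np.ndarray,
                     alpha: float = 0.01) -> bool:
    """Flag drift when live data no longer matches the training distribution.

    Uses a two-sample Kolmogorov-Smirnov test on one feature; a real
    monitor would track many features and model-quality metrics too.
    """
    statistic, p_value = ks_2samp(train_feature, live_feature)
    return p_value < alpha  # small p-value: distributions differ, so drift

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)
live = rng.normal(loc=0.6, scale=1.0, size=5_000)   # shifted environment
print(needs_retraining(train, live))  # True: schedule retraining at the edge
```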

Through IBM Research, IBM is addressing the many challenges of building an all-encompassing edge architecture and horizontally scalable data and AI technologies. IBM has a wealth of edge capabilities and an architecture to create the appropriate platform for each application.

IBM AI entry points at the edge

IBM sees Edge Computing as a $200 billion market by 2025. Dr. Fuller and his organization have identified four key market entry points for developing and expanding IBM’s edge compute strategy. In order of size, IBM believes its priority edge markets to be intelligent factories (Industry 4.0), telcos, retail automation, and connected vehicles.

IBM and its Red Hat portfolio already have an established presence in each market segment, particularly in intelligent operations and telco. Red Hat is also active in the connected vehicles space.

Industry 4.0

Three prior industrial revolutions, beginning in the 1700s, have led to our current, in-progress fourth revolution, Industry 4.0, which centers on digital transformation.

Manufacturing is the fastest growing and the largest of IBM’s four entry markets. In this segment, AI at the edge can improve quality control, production optimization, asset management, and supply chain logistics. IBM believes there are opportunities to achieve a 4x speedup in implementing edge-based AI solutions for manufacturing operations.

For its Industry 4.0 use case development, IBM, through product, development, research and consulting teams, is working with a major automotive OEM. The partnership has established the following joint objectives:

  • Increase automation and scalability across dozens of plants using hundreds of AI/ML models. This client has already seen value in applying AI/ML models for manufacturing applications. IBM Research is helping with re-training models and implementing new ones in an edge environment to help scale even more efficiently. Edge offers faster inference and low latency, allowing AI to be deployed in a wider variety of manufacturing operations requiring instant solutions.
  • Dramatically reduce the time required to onboard new models. This will allow training and inference to be done faster and allow large models to be deployed much more quickly. The quicker an AI model can be deployed in production, the quicker the time-to-value and the return on investment (ROI).
  • Accelerate deployment of new inspections by reducing the labeling effort and iterations needed to produce a production-ready model via data summarization. Selecting small data sets for annotation by manually examining thousands of images is a time-consuming process that tends to result in labeling redundant data. Using ML-based automation for data summarization will accelerate the process and produce better model performance (see the clustering sketch after this list).
  • Enable Day-2 AI operations to help with data lifecycle automation and governance, model creation, and production-error reduction, and to detect out-of-distribution data to help determine whether a model’s inference is accurate. IBM believes this will allow models to be created faster without data scientists.
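One common way to implement the data summarization mentioned in the list above is to cluster image embeddings and route only the most representative sample from each cluster to human annotators. A minimal sketch with k-means; the embedding source and the annotation budget are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_for_labeling(embeddings: np.ndarray, budget: int) -> np.ndarray:
    """Pick `budget` representative samples to annotate instead of all of them.

    Clusters the embedding space, then returns the index of the sample
    closest to each cluster center, a simple data-summarization heuristic.
    """
    km = KMeans(n_clusters=budget, n_init=10, random_state=0).fit(embeddings)
    picks = []
    for center in km.cluster_centers_:
        picks.append(int(np.argmin(np.linalg.norm(embeddings - center, axis=1))))
    return np.array(sorted(set(picks)))

rng = np.random.default_rng(1)
feats = rng.normal(size=(10_000, 128))   # stand-in for image embeddings
print(select_for_labeling(feats, budget=50)[:10])  # label 50 images, not 10,000
```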

Maximo Application Suite

IBM’s Maximo Application Suite plays an important part in implementing large manufacturers' current and future IBM edge solutions. Maximo is an integrated public or private cloud platform that uses AI, IoT, and analytics to optimize performance, extend asset lifecycles, and reduce operational downtime and costs. IBM is working with several large manufacturing clients currently using Maximo to develop edge use cases, and even uses it within its own manufacturing operations.

IBM has research underway to develop a more efficient method of handling life cycle management of large models that require immense amounts of data. Day 2 AI operations tasks can sometimes be more complex than initial model training, deployment, and scaling. Retraining at the edge is difficult because resources are typically limited.

Once a model is trained and deployed, it is important to monitor it for drift caused by changes in data distributions or anything that might cause a model to deviate from original requirements. Inaccuracies can adversely affect model ROI.

Day-2 AI Operations (retraining and scaling)

Day-2 AI operations consist of continual updates to AI models and applications to keep up with changes in data distributions, changes in the environment, a drop in model performance, availability of new data, and/or new regulations.

IBM recognizes the advantages of performing Day-2 AI Operations, which includes scaling and retraining at the edge. It appears that IBM is the only company with an architecture equipped to effectively handle Day-2 AI operations. That is a significant competitive advantage for IBM.

A company using an architecture that requires data to be moved from the edge back into the cloud for Day-2 related work will be unable to support many factory AI/ML applications because of the sheer number of AI/ML models to support (100s to 1000s).

“There is a huge proliferation of data at the edge that exists in multiple spokes,” Dr. Fuller said. "However, all that data isn’t needed to retrain a model. It is possible to cluster data into groups and then use sampling techniques to retrain the model. There is much value in federated learning from our point of view.”

Federated learning is a promising training solution being researched by IBM and others. It preserves privacy by using a collaboration of edge devices to train models without sharing the data with other entities. It is a good framework to use when resources are limited.
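The canonical algorithm behind this idea is federated averaging (FedAvg): each spoke trains locally and ships back only model weights, never raw data, and the hub aggregates them. A minimal sketch, assuming each spoke's model is a simple weight vector:

```python
import numpy as np

def federated_average(spoke_weights: list[np.ndarray],
                      spoke_sample_counts: list[int]) -> np.ndarray:
    """Aggregate locally trained models without ever moving the raw data.

    Each spoke contributes its model weights and how many samples it
    trained on; the hub returns the sample-weighted average (FedAvg).
    """
    total = sum(spoke_sample_counts)
    stacked = np.stack(spoke_weights)
    fractions = np.array(spoke_sample_counts, dtype=float) / total
    return (stacked * fractions[:, None]).sum(axis=0)

# Three spokes, each with a locally trained 4-parameter model.
w = [np.array([1.0, 2.0, 3.0, 4.0]),
     np.array([1.2, 1.8, 3.1, 3.9]),
     np.array([0.9, 2.1, 2.9, 4.2])]
print(federated_average(w, spoke_sample_counts=[1000, 3000, 500]))
```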

Dealing with limited resources at the edge is a challenge. IBM’s edge architecture accommodates the need to ensure resource budgets for AI applications are met, especially when deploying multiple applications and multiple models across edge locations. For that reason, IBM developed a method to deploy data and AI applications to scale Day-2 AI operations utilizing hub and spokes.
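Meeting resource budgets can begin with a simple admission check before each deployment: place a model on a spoke only if its declared demand fits the spoke's remaining capacity. The sketch below is a hypothetical illustration of that idea, not IBM's scheduler.

```python
from dataclasses import dataclass

@dataclass
class Budget:
    cpu_cores: float
    memory_gb: float

def admit(app_demand: Budget, spoke_free: Budget) -> bool:
    """Admission check: deploy only if the spoke can afford the app."""
    return (app_demand.cpu_cores <= spoke_free.cpu_cores
            and app_demand.memory_gb <= spoke_free.memory_gb)

spoke_free = Budget(cpu_cores=4.0, memory_gb=8.0)
print(admit(Budget(1.5, 3.0), spoke_free))   # True: fits the remaining budget
print(admit(Budget(6.0, 2.0), spoke_free))   # False: needs more CPU than remains
```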

Contrast the status quo method of performing Day-2 operations (centralized applications and a centralized data plane) with the more efficient managed hub-and-spoke method, which distributes both the applications and the data plane. The hub allows it all to be managed from a single pane of glass.

Data Fabric Extensions to Hub and Spokes

IBM uses hub and spoke as a model to extend its data fabric. The model should not be thought of in the context of a traditional hub and spoke. IBM’s hub provides centralized capabilities to manage clusters and create multiple hubs that can be aggregated to a higher level. This architecture has four important data management capabilities.

  1. First, models running in unattended environments must be monitored. From an operational standpoint, detecting when a model’s effectiveness has significantly degraded and if corrective action is needed is critical.
  2. Secondly, in a hub and spoke model, data is being generated and collected in many locations, creating a need for data life cycle management. Working with large enterprise clients, IBM is building unique capabilities to manage the data plane across the hub and spoke estate, optimized to meet data lifecycle, regulatory and compliance, and local resource requirements. Automation determines which input data should be selected and labeled for retraining purposes and used to further improve the model. Identification is also made for atypical data that is judged worthy of human attention.
  3. The third issue relates to AI pipeline compression and adaptation. As mentioned earlier, edge resources are limited and highly heterogeneous. While a cloud-based model might have a few hundred million parameters or more, edge models cannot afford such extravagance. To reduce the edge compute footprint, model compression can cut the parameter count, for example, from several hundred million down to a few million (see the pruning sketch after this list).
  4. Lastly, suppose a scenario exists where data is produced at multiple spokes but cannot leave those spokes for compliance reasons. In that case, IBM Federated Learning allows learning across heterogeneous data in multiple spokes. Users can discover, curate, categorize and share data assets, data sets, analytical models, and their relationships with other organization members.
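Magnitude pruning is one standard compression technique of the kind alluded to in point 3: the smallest-magnitude weights are zeroed out and can then be stored sparsely. A minimal sketch, with the sparsity target chosen purely for illustration:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.9) -> np.ndarray:
    """Zero out the smallest-magnitude weights to shrink a model's footprint.

    With sparsity=0.9, 90% of parameters become zero and can be stored
    in a sparse format, one standard route from a cloud-sized model
    to an edge-sized one.
    """
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

layer = np.random.default_rng(2).normal(size=(1024, 1024))
pruned = magnitude_prune(layer, sparsity=0.9)
print(f"nonzero before: {layer.size}, after: {int((pruned != 0).sum())}")
```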

In addition to AI deployments, the hub and spoke architecture and the previously mentioned capabilities can be employed more generally to tackle challenges faced by many enterprises in consistently managing an abundance of devices within and across their enterprise locations. Management of the software delivery lifecycle or addressing security vulnerabilities across a vast estate are a case in point.

Multicloud and Edge platform

In the context of its strategy, IBM sees edge and distributed cloud as an extension of its hybrid cloud platform built around Red Hat OpenShift. One of the newer and more useful options created by the Red Hat development team is Single Node OpenShift (SNO), a compact version of OpenShift that fits on a single server. It is suitable for locations that still warrant a server but only as a single-node, rather than clustered, deployment.

For smaller footprints such as industrial PCs or computer vision boards (for example, NVIDIA Jetson Xavier), Red Hat is working on a project that builds an even smaller version of OpenShift, called MicroShift, that provides full application deployment and Kubernetes management capabilities. It is packaged so that it can be used for edge-device deployments.

Overall, IBM and Red Hat have developed a full complement of options to address a large spectrum of deployments across different edge locations and footprints, ranging from containers to management of full-blown Kubernetes applications from MicroShift to OpenShift and IBM Edge Application Manager.
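At its simplest, managing such a fleet from one place is a loop over cluster contexts. The sketch below uses the standard Kubernetes Python client to poll node readiness across hypothetical spoke clusters; the context names are invented, and real fleet tooling such as RHACM operates at a much higher level than this.

```python
from kubernetes import client, config

# Hypothetical kubeconfig contexts, one per spoke cluster (SNO, MicroShift, ...).
SPOKE_CONTEXTS = ["factory-sno-east", "retail-microshift-12"]

def fleet_node_health() -> dict[str, dict[str, bool]]:
    """Poll node readiness across every spoke from a single control point."""
    health: dict[str, dict[str, bool]] = {}
    for ctx in SPOKE_CONTEXTS:
        config.load_kube_config(context=ctx)   # switch to this spoke's cluster
        nodes = client.CoreV1Api().list_node().items
        health[ctx] = {
            n.metadata.name: any(
                c.type == "Ready" and c.status == "True" for c in n.status.conditions
            )
            for n in nodes
        }
    return health

if __name__ == "__main__":
    for cluster, nodes in fleet_node_health().items():
        print(cluster, nodes)
```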

Much is still in the research stage. IBM's objective is to achieve greater consistency in how locations and application lifecycles are managed.

First, Red Hat plans to introduce hierarchical layers of management with Red Hat Advanced Cluster Management (RHACM), to scale the number of edge locations managed by this product by two to three orders of magnitude. Additionally, securing edge locations is a major focus. Red Hat is continuously expanding platform security features, for example by recently including Integrity Measurement Architecture in Red Hat Enterprise Linux, or by adding Integrity Shield to protect policies in RHACM.

Red Hat is partnering with IBM Research to advance technologies that will permit it to protect platform integrity and the integrity of client workloads through the entire software supply chain. In addition, IBM Research is working with Red Hat on analytic capabilities to identify and remediate vulnerabilities and other security risks in code and configurations.

Telco network intelligence and slice management with AI/ML

Communication service providers (CSPs) such as telcos are key enablers of 5G at the edge. 5G benefits for these providers include:

  • Reduced operating costs
  • Improved efficiency
  • Increased distribution and density
  • Lower latency

The end-to-end 5G network comprises the Radio Access Network (RAN), transport, and core domains. Network slicing in 5G is an architecture that enables multiple virtual and independent end-to-end logical networks with different characteristics such as low latency or high bandwidth, to be supported on the same physical network. This is implemented using cloud-native technology enablers such as software defined networking (SDN), virtualization, and multi-access edge computing. Slicing offers necessary flexibility by allowing the creation of specific applications, unique services, and defined user groups or networks.

An important aspect of enabling AI at the edge requires IBM to provide CSPs with the capability to deploy and manage applications across various enterprise locations, possibly spanning multiple end-to-end network slices, using a single pane of glass.

5G network slicing and slice management

Network slices are an essential part of IBM's edge infrastructure that must be automated, orchestrated and optimized according to 5G standards. IBM’s strategy is to leverage AI/ML to efficiently manage, scale, and optimize the slice quality of service, measured in terms of bandwidth, latency, or other metrics.
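Concretely, managing slice quality of service means continuously comparing measured bandwidth and latency against each slice's targets and reacting when they drift. A minimal sketch, with the slice definition and thresholds invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class SliceSLA:
    name: str
    min_bandwidth_mbps: float   # guaranteed throughput
    max_latency_ms: float       # latency ceiling

def sla_violations(sla: SliceSLA, measured_mbps: float, measured_ms: float) -> list[str]:
    """Return which QoS targets a slice is currently missing."""
    issues = []
    if measured_mbps < sla.min_bandwidth_mbps:
        issues.append("bandwidth below target: scale out slice resources")
    if measured_ms > sla.max_latency_ms:
        issues.append("latency above ceiling: reroute or add edge capacity")
    return issues

urllc = SliceSLA("low-latency-slice", min_bandwidth_mbps=50, max_latency_ms=5)
print(sla_violations(urllc, measured_mbps=80, measured_ms=9.2))
# ['latency above ceiling: reroute or add edge capacity']
```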

5G and AI/ML at the edge also represent a significant opportunity for CSPs to move beyond traditional cellular services and capture new sources of revenue with new services.

Communications service providers need management and control of 5G network slicing enabled with AI-powered automation.

Dr. Fuller sees a variety of opportunities in this area. "When it comes to applying AI and ML on the network, you can detect things like intrusion detection and malicious actors," he said. "You can also determine the best way to route traffic to an end user. Automating 5G functions that run on the network using IBM network automation software also serves as an entry point.”

In IBM’s current telecom trial, IBM Research is spearheading the development of a range of capabilities targeted for the IBM Cloud Pak for Network Automation product using AI and automation to orchestrate, operate and optimize multivendor network functions and services that include:

  • End-to-end 5G network slice management with planning & design, automation & orchestration, and operations & assurance
  • Network Data Analytics Function (NWDAF) that collects data for slice monitoring from 5G Core network functions, performs network analytics, and provides insights to authorized data consumers.
  • Improved operational efficiency and reduced cost

Future leverage of these capabilities by existing IBM Clients that use the Cloud Pak for Network Automation (e.g., DISH) can offer further differentiation for CSPs.

5G radio access

Open radio access networks (O-RANs) are expected to significantly impact telco 5G wireless edge applications by allowing a greater variety of units to access the system. The O-RAN concept separates the DU (Distributed Unit) and CU (Centralized Unit) from the monolithic baseband unit of 4G and connects them with open interfaces.

The O-RAN system is more flexible. It uses AI to establish connections over open interfaces, optimizing the categorization of a device by analyzing information about its prior use. Like other edge models, the O-RAN architecture provides an opportunity for continuous monitoring, verification, analysis, and optimization of AI models.

The IBM-telco collaboration is expected to advance O-RAN interfaces and workflows. Areas currently under development are:

  • Multi-modal (RF level + network-level) analytics (AI/ML) for wireless communication with high-speed ingest of 5G data
  • Capability to learn patterns of metric and log data across CUs and DUs in RF analytics
  • Utilization of the antenna control plane to optimize throughput
  • Primitives for forecasting, anomaly detection and root cause analysis using ML
  • Opportunity of value-added functions for O-RAN

IBM Cloud and Infrastructure

The cornerstone for the delivery of IBM's edge solutions as a service is IBM Cloud Satellite. It presents a consistent cloud-ready, cloud-native operational view with OpenShift and IBM Cloud PaaS services at the edge. In addition, IBM integrated hardware and software edge systems will provide RHACM-based management of the platform when clients or third parties have existing managed-as-a-service models. It is essential to note that in either case this is done within a single control plane for hubs and spokes, which helps optimize execution and management from any cloud to the edge in the hub and spoke model.

IBM's focus on “edge in” means it can provide infrastructure such as software-defined storage for a federated-namespace data lake that surrounds other hyperscaler clouds. Additionally, IBM is exploring integrated full-stack edge storage appliances based on hyperconverged infrastructure (HCI), such as the Spectrum Fusion HCI, for enterprise edge deployments.

As mentioned earlier, data gravity is one of the main driving factors of edge deployments. IBM has designed its infrastructure to meet those data gravity requirements, not just for the existing hub and spoke topology but also for a future spoke-to-spoke topology where peer-to-peer data sharing becomes imperative (as illustrated with the wealth of examples provided in this article).

Wrap up

Edge is a distributed computing model. One of its main advantages is that computing, data storage, and processing sit close to where data is created. Without the need to move data to the cloud for processing, real-time application of analytics and AI capabilities provides immediate solutions and drives business value.

IBM’s goal is not to move the entirety of its cloud infrastructure to the edge. That would have little value; the edge would simply function as a spoke in a hub-to-spoke model, operating on actions and configurations dictated by the hub.

IBM’s architecture will provide the edge with autonomy to determine where data should reside and from where the control plane should be exercised.

Equally important, IBM foresees this architecture evolving into a decentralized model capable of edge-to-edge interactions. IBM has no firm designs for this as yet. However, the plan is to make the edge infrastructure and platform a first-class citizen instead of relying on the cloud to drive what happens at the edge.

Developing a complete and comprehensive AI/ML edge architecture, and in fact an entire ecosystem, is a massive undertaking. IBM faces many known and unknown challenges that must be solved before it can achieve success.

However, IBM is one of the few companies with the necessary partners and the technical and financial resources to undertake and successfully implement a project of this magnitude and complexity.

It is reassuring that IBM has a plan and that its plan is sound.

Paul Smith-Goodson is Vice President and Principal Analyst for quantum computing, artificial intelligence and space at Moor Insights and Strategy. You can follow him on Twitter for more current information on quantum, AI, and space.

IBM board probes claims of fudged sales figures that led to big bonuses for execs

Exclusive: IBM's board of directors has started an investigation into claims that its sales numbers were manipulated, leading to executives securing big bonuses. If the board fails to take any action, it may face a lawsuit to claw back millions of dollars from top staff.

In late March, just days before IBM was sued for securities fraud, the IT giant's board received a demand letter from attorneys representing shareholders.

The letter, according to sources familiar with the matter, asked the board to investigate allegations that later surfaced in the securities lawsuit: that the company, under former CEO Ginni Rometty and current CEO Arvind Krishna, deceived shareholders by unlawfully manipulating mainframe revenues in a way that misled investors and inflated executive bonuses.

Our sources tell us that if the IBM board fails to deal with the allegations, a derivatives lawsuit is expected to follow in which the plaintiffs will try to claw back millions of dollars worth of bonus payments made to executives.

A shareholder derivatives lawsuit is brought by shareholders on behalf of a corporation. It is filed against corporate leaders – company board members, officers, or others – alleged to have neglected their fiduciary duty.

We're told IBM's board has engaged a law firm to investigate the fraud allegations. If the board takes no action to address the supposed fraud, the plaintiffs should then be able to file a derivatives claim in the company's name.

Assuming the court finds sufficient merit in the plaintiffs' claim to allow a derivatives case, and if the plaintiffs prevail, most of any damage award would belong to the company – which would benefit shareholders, but would not go to them directly.

A legal scholar who spoke with The Register on background, citing a lack of familiarity with this specific case, said it's unusual for shareholders to present the board with a demand letter, because unless you can show the board is conflicted or acting in self-interest, shareholders generally aren't allowed to initiate a derivatives case.

IBM did not respond to two requests to confirm or deny the existence of the demand letter.

Buried on page 38 of a 10-Q filing with the SEC last week, however, Big Blue disclosed it had received and responded to such a missive.

"On March 25, 2022, the Board of Directors received a shareholder demand letter making similar allegations [to the securities class-action lawsuit] and demanding that the company's Board of Directors take action to assert the company's rights," IBM noted in the submission, a detail that so far has gone unreported.

"A special committee of independent directors has been formed to investigate the issues raised in the letter."

The securities fraud claim [PDF] against IBM was filed on April 5 in New York, on behalf of the June E. Adams Irrevocable Trust. It names as defendants not only IBM, but current and former corporate leaders including Rometty, former CFO Martin J. Schroeter (now CEO of IBM spin-off Kyndryl), current CFO James J. Kavanaugh, and current CEO Arvind Krishna.

Since the lawsuit was initially filed by law firm Milberg Coleman Bryson Phillips Grossman, LLC, it has been joined by at least five other law firms representing other IBM shareholders. In June, the court recognized Iron Workers Local 580 Joint Funds as the lead plaintiff.

The complaint contends that IBM between April 4, 2017 and October 20, 2021 "improperly and in violation of Generally Accepted Accounting Principles ('GAAP') embarked on a fraudulent scheme to shift billions of dollars in revenues from its mainframe line of business to its Strategic Imperatives and CAMSS line of business."

... a fraudulent scheme to shift billions of dollars in revenues from its mainframe line of business to its Strategic Imperatives and CAMSS line of business

CAMSS is an abbreviation for Cloud, Analytics, Mobile, Security and Systems, business segments that were designated as strategic imperatives by IBM's leadership. The complaint argues that IBM instituted a bonus scheme that rewarded executives and encouraged IBM salespeople to prioritize the sale of CAMSS products. As a result, revenue arising from mainframe sales got reclassified as CAMSS sales, which boosted bonuses even as it misled investors – by giving shareholders an untrue picture of the IT giant's sales performance – it is claimed.

The Register spoke with two former IBM sales employees who were unaffiliated with the litigation and had between them more than forty years of experience with Big Blue. They described manipulative sales reporting – not all of which is necessarily unlawful – as a common practice, not only at IBM but at other large enterprise software firms.

"Think of it as, like, the worst kept secret," said one, who described one way IBM salespeople adjust sales figures to their own advantage. "It all starts with the CRM system, the customer relationship management system. IBM uses SugarCRM, but they make it very easy when you get a deal.

"You go through a bunch of checkboxes and you check off which categories pertain to this deal and how much of it is services, how much of it is hardware, how much in particular is cloud-based or analytics. And this is the big thing with the CAMSS, right? To check off all the boxes pertaining to CAMSS and then you allocate a percent to that."

That is to say, you only have to assign a small part of the sales deal to CAMSS to record it as a CAMSS win.

The other described various dubious directives that salespeople had to comply with, which steered salespeople toward meeting management goals and discouraged rocking the boat.

For example, this individual described IBM Z Linux part number manipulation. "A very common practice was to create a duplicate part number," this former IBMer explained. "It's a unique part number but there's no difference in product or delivery."

That makes no difference to the customer, we were told, but the way products got categorized affected sales staff and executive compensation.

The issue before the court in New York is whether flexible accounting of this sort, to the extent it can be documented, violated the law or IBM misled investors. If IBM's board finds no corrective actions are necessary, a successful derivatives complaint could return millions paid in unwarranted executive bonuses to company coffers. ®

Editor's note: This article was updated to clarify that the demand letter was sent in late March, just before the securities lawsuit was filed in April, as confirmed by the 10-Q filing.

The risky new way of building mobile broadband networks, explained by Rakuten Mobile CEO Tareq Amin

In 2019, the Trump administration brokered a deal allowing T-Mobile to buy Sprint as long as it helped Dish Network stand up a new 5G network to keep the number of national wireless carriers at four and preserve competition in the mobile market. You can say a lot about that deal, but it happened. And now, in 2022, Dish’s network — which is called Project Genesis, that’s a real name — is slowly getting off the ground. And it’s built on a new kind of wireless technology called Open Radio Access Network, or ORAN. Dish’s network is only the third ORAN network in the entire world, and if ORAN works, it will radically change how the entire wireless industry operates.

I have wanted to know more about ORAN for a long time. So today, I’m talking to Tareq Amin, CEO of Rakuten Mobile. Rakuten Mobile is a new wireless carrier in Japan. It just launched in 2020. It’s also the world’s first ORAN network, and Tareq basically pushed this whole concept into existence.

Tareq’s big idea, an Open Radio Access Network, is to break apart the hardware and software and make it so that many more vendors can build radio access hardware that Rakuten Mobile can run its own software on. Think about it like a Mac versus a PC: a Mac is Apple hardware running Apple’s software, while a PC can come from anyone and run Windows just fine or run another operating system if you want.

That’s the promise of ORAN: that it will increase competition and lower costs for cellular base station hardware, allow for more software innovation, and generally make networks faster and more reliable because operators like Rakuten Mobile will be in tighter control of the software that runs the networks and move all that software from the hardware itself to cloud services like Amazon AWS.

Since Rakuten Mobile is making all this software that can run on open hardware, they can sell it to other people. So Tareq is also the CEO of Rakuten Symphony, which — you guessed it — is helping Dish run its network here along with another network called 1&1 in Germany.

I really wanted to know if ORAN is going to work, and how Tareq managed to make it happen in such a traditional industry. So we got into it — like, really into it.

Okay, Tareq Amin, CEO of Rakuten Mobile. Here we go.

Tareq Amin is the CEO of Rakuten Mobile and the CEO of Rakuten Symphony. Welcome to Decoder.

Thank you, Nilay. Pleasure being with you.

I am excited to talk to you. Rakuten Mobile is one of the leaders in this next generation of wireless networks being built and I am very curious about it. It is in Japan, but we have a largely US-based audience, so can you explain what Rakuten is? What kind of company is it, and what is its presence like in Japan?

The Rakuten Group as a whole is not a telecom company, but mostly an internet services company. It started as one of the earliest e-commerce technology companies in Japan. Today, it is one of the largest in e-commerce, fintech, banking, travel, et cetera. These significant internet services were primarily built around a massive ecosystem in Japan, and the only missing piece for Rakuten as a group was the mobile connectivity business. That is why I came to Japan, to help build and launch a disruptive architecture for its mobile 4G/5G network.

Let me make a really bad comparison here. This company has been a huge internet services provider for a while. This is kind of like if Yahoo was massively successful and started a wireless network.

Correct. I mean, think of Amazon. What would happen if Amazon launched a mobile network in the US? This is the best analogy I could give, because Rakuten operates at that scale in Japan. This company with a disruptive mindset, disruptive skill set, disruptive culture, and disruptive organization endorsed my super crazy idea of how we should build this next-generation mobile infrastructure. I think that is where I attribute most of the success. The company’s DNA and culture is just remarkably different.

So it’s huge. How is it structured overall? How is Rakuten Mobile a part of that structure?

Of all the entities today, I think the founder and chairman of the company, Mickey [Hiroshi “Mickey” Mikitani], is probably one of the most innovative leaders I have ever had the opportunity to work with. I cannot tell you how much I enjoy the interactions we have with him. He is down to earth and his leadership style is definitely hands-on; he doesn’t really operate at a high level.

The fundamental belief of Rakuten is around synergistic impact for its ecosystem. The company has 71 internet-facing services in Japan — we also operate globally, by the way — and you as a consumer have one membership ID that you benefit from. Points, membership, and loyalty are the foundation of what this company works on. Regardless of which services you consume, they are all tied through this unique ID across all 71.

The companies and the organizations internally have subsidiaries and legal structures that would separate all of them, but synergistically, they are all connected through this membership/points/loyalty system. We think it is really critical to grow the synergistic impact of not just one service, but the collective services, to the end consumer.

Today, Rakuten Mobile is a subsidiary of the group, and Rakuten Symphony is more focused on our platform business. It focuses on the globalization of the technology and architecture we have done in Japan, by selling and promoting to global customers.

When you say Symphony, do you mean the wireless network technology or the technology of the whole company?

Symphony itself is much more than just wireless. Of course, it has Edge Cloud connectivity architecture, the wireless technology stack for 4G/5G, and life cycle management for automation operations. In August of last year we launched Rakuten Symphony as a formal entity to take all the technology we have now and promote it to a global customer base.

I think one of the reasons you and I are having this conversation is because Dish Network in the United States is a Symphony customer. They are launching a next-generation 5G network and I have been very curious about how that is going. It sounds like Symphony is a big piece of the puzzle there.

To give you a bit of background, maybe we should start with the mobile business in Japan, because it is the foundation this idea initially started from. So, I would tell you, I have had a super crazy life. I am really blessed that I had the opportunity to work with amazing leaders and across three continents so far. My previous experiences before coming to Japan, which involved building another large greenfield network in India called Reliance Jio, have taught me quite a bit.

To be very frank with you, it taught me the value of the US dollar. When you go into a country where the economy of units — how much you could charge a consumer — is one to two US dollars, the idea of supply chain procurement and cost has to change. You have to find a way to build cost-efficient networks.

The launch of Reliance Jio was very successful and became a really good Cinderella story for the industry. I am extremely thankful for what Jio has taught me personally, and I have always wondered what I would do differently if I had a second opportunity to build a greenfield.

To give everybody listening to this podcast some perspective, the mobile technology industry has been about nothing but hardware changes since the inception of the first 1G in 1981. You just take the old hardware and replace it with new hardware. Nothing has changed in the way we deploy networks when the Gs change, even now in 2022. It is still complex and expensive, and I don’t think the essence of AI and autonomy exists in the DNA of these networks. That is why when you look at the cost expenditures to build new technology like 5G, it is so cost-prohibitive.

It was by coincidence that I met the chairman and CEO of Rakuten group, Mickey Mikitani, and I loved everything that Rakuten is all about. Like most people, I didn’t necessarily know who Rakuten was at the time. I only knew of them because I love football (soccer) and they were a big sponsor of FC Barcelona.

When Mickey started explaining the company fabric to me, about its DNA and internet services, I thought about what a significant opportunity he would have if he adopted a different architecture in how these networks are deployed — one that moves away from proprietary hardware. What would happen if we remove the hardware completely and build the world’s first, and only, cloud-native software telco?

Let me be really honest with you, this was just in PPT at the time. I conceived the idea thinking about what I would do differently if I were granted another opportunity like Reliance Jio. One of the first key elements I wanted to change is adopting this unique cloud architecture, because nobody had really deployed an end-to-end horizontal cloud across any telco yet.

The second element — which you have probably heard of because the industry has been talking about it excitedly — is this thing called Open RAN, which is the idea of disaggregating hardware and software. The third element, my ultimate dream, is the enablement of a full autonomous network that is able to run itself, fix itself, and heal itself without human beings.

This is the journey of mobile, and I think this is what differentiates us so much. I can’t say I had a recipe that defined what success would look like, but I was obsessed. Obsessed with creating a world-class organization with a larger ecosystem, and getting everybody motivated about this concept that did not exist four years ago.

Now here we are, post commercial launch. The world is celebrating what we have done. They like and enjoy the ideas around this disaggregated network, and they love the concept of cloud-native architecture. What I love the most is that we opened up a healthy debate across the globe. We really encourage and support what Dish is doing in the United States by deploying Open RAN as an architecture. I think this is absolutely the right platform to build resilient, scalable, cost-effective mobile networks for the future.

That is the high-level story of how this journey started with a super crazy, ambitious idea that nobody thought would succeed. If you go back four years to some of the press releases that were published, I cannot tell you how many times I was told I’m crazy or that I’m going to fail. As I said, we became fanatic about this idea, and that is what drove us all to emotionally connect to the mission, the objective. I am very, very happy to see the results that the team has achieved.

I want to take that in stages. I definitely want to talk about Jio, because it is a really interesting foundational element of this whole story. I want to talk about what you have built with O-RAN, and how that works in the industry. I also want to talk about where it could go as a platform for the network providers. But I have to ask you the Decoder question first. You have described your ideas as super crazy like five times now. You are the CEO of a big wireless provider in Japan, and you are selling that stuff to other CEOs. I have to ask you the Decoder question. How do you make decisions?

I know this might sound a little controversial, but I have to tell you. In any project I have taken, even from my early days, we have always been taught that you have to have a Plan A and a Plan B. This has never worked for me. I have a concept I call, “No Plan B for me.”

I don’t go in thinking, “This project will fail, therefore I need to look at alternatives and options,” so I am absolutely not worried about making big, bold decisions. I live by a basic philosophy that it is okay to fail sometimes, but let’s fail fast so we can pick ourselves up and progress. I am not saying people shouldn’t have Option A and Option B. I just feel that, for me personally, Option B might give my mind the opportunity to entertain that there is an escape clause. That may not necessarily be a good thing when working on ambitious projects. I think you need to be committed to your beliefs and ideas.

I have made some tough calls during my career, but for whatever reason, I have never really been worried about the consequences of failure. Sometimes we learn more from the mistakes we make and from having difficult experiences, whether they are personal or professional. I think my decision-making capability is one that is very bold, trying to make the team believe in the objectives that we are trying to accomplish and not worrying about failure. Sometimes you just need to be focused on the idea and the mission. Yes, the results are important, but that is not the only thing I am married to.

This is how I have operated all my life, and so far, I am really happy with some of the thinking I have adopted. I am not saying people should not have options in their lives, but this idea of “no Plan B” has its merits in certain projects. How can you adapt your leadership style when approaching projects, rather than thinking, “What is the other option?”

I think with deploying millions upon millions of dollars of mobile broadband equipment, it often feels like you have got to be committed. Let’s talk about that, starting with Jio. If the listeners don’t know, Reliance Jio is now the biggest carrier in India. It is extremely popular, but it launched as a pretty disruptive challenger against incumbent 4G carriers like Airtel. You just gave it away for free for like the first six months, and it has been lower-cost ever since. This is not the new idea though, right? It is not the open hardware-software disaggregated network that you are talking about now. How did you make Jio so cheap at the beginning?

I will tell you a one-minute prelude. I was sitting very comfortably in Newport Beach when I got a call from my friend. He asked me if I would be interested in going to India and being part of a leadership team to build this ambitious, audacious idea for a massive network at scale, in a country that has north of 1.3 billion people. My first reaction was, “What do I know about India? I have colleagues, but I have never really been there.”

It seemed like an interesting opportunity, and he encouraged me to go meet the executive leadership team of Reliance Jio. I remember flying to Dallas to have a conversation with three leaders that I didn’t really know at the time. One of them in particular, I have to tell you, the more he talked, the more I just wanted to listen. I was amazed by his ambition for what he wanted to achieve in the country.

What was his name?

Mukesh Ambani. I have learned quite a bit from him. India was ranked 154th in the world in mobile broadband penetration before Reliance Jio. The idea was, “Can we assemble an organization that brings ubiquitous connectivity anywhere and everywhere you go across the country? Can 1.3 billion people benefit from this massive transformation that offers cutting-edge services?”

At the time, LTE was the service that Jio launched with. I was really amazed by this ambition and how big it was. I said, “This is an opportunity I just cannot pass up.” It was much bigger than the financial reward; it was an opportunity of learning and understanding. I truly enjoy meeting different cultures. The more I interact with people from different parts of the world, the more it fuels the energy inside me.

So I picked myself up and I moved to India. I landed in the old Mumbai airport, and when I powered on my device, I saw a symbol I hadn’t seen in the US for a decade — 2G. I knew the opportunity Jio had if we did this right. I mean, think about it. 2G. What is really the definition of broadband? 256 kilobits per second? That’s not internet services. The foundation of Jio started with this.

I will tell you the big things that I have learned. Most people think the way you achieve the best pricing is through a process called request for proposals and reverse auctions, to bring vendors and partners to compete against each other. Sometimes there is a better way to do this. You find larger companies where the CEOs have emotion and connection to the idea that you are building, and are willing to work with you as a true partner.

One of the key, fundamental pillars I learned from Jio is that not everything is about status quo. How you run provider selection, vendor selection, or requests for proposal, everything starts from the top leadership of partners you select. They need the ability to connect with the emotional journey — because it is an emotional journey after all — to do something at the scale of what Jio wanted to do. One of the biggest lessons I learned is the process of selecting suppliers who are uniquely different.

In terms of building a network at a relatively low cost, I will explain how this Open RAN idea came in. During my tenure at Jio, I really started thinking that in order to build a network at scale, regardless of how cheap your labor is, you need to fundamentally change your operating platforms for digitization. Jio would have north of 100,000 people a day working in the field, deploying sites. How do you manage them — give them tasks, check on the quality of installation they do, and audit the work before you turn up any of the base stations, sites, or radio units?

I have driven this entire digitization and the digital workflows associated with it to connect everybody in India, whether it is Jio employees, contractors, or distributed organizations. Up to 400,000 people at any instant of time would come to the systems that my team has built. That changed everything. It changed the mentality of how we drive cost efficiency and how we run the operations.

This is where I would tell you that big building blocks started formulating in my mind around automation and its impact on operational efficiency if you approach it with a completely different point of view from the current legacy systems you find in other telcos. Because of the constraint of financial pressure on what we call the average revenue per user, the ARPU, which is the measurement of how much you charge a mobile customer, I wanted to find a different way to deploy the network.

When you build a network like Jio that has to support 1.3 billion, it’s not just about these big, massive radio sites you deploy. We need things called small cells, which are products that look like Wi-Fi access points, but you deploy lots of them to achieve what we call a heterogeneous design, a design that has big and small sites to meet capacity and coverage requirements.

I prepared an amazing presentation about small cells for the leadership team of Jio and I thought I knocked it out of the park. But then I was asked a question I had never heard in my life. Imagine! I am a veteran in this industry and have been doing this for a very long time. Someone said, “Tareq, I love your strategy. Can you tell me who the chipset provider is for the small cell product?” I’m like, “What are you talking about?” I have never been asked such a question by any operator that I have ever worked for outside of India.

I was told, “Look, Tareq, money doesn’t grow on trees in India. You need to know the cost. To know the cost, you must understand the component cost.” That was the first building block. I said, “Okay, next time I come to this meeting, I am not going to be uneducated anymore.”

I took on a small project which, at the time, did not seem audacious to me. I said, “Look, if I go to an electronics shop in the US, like a Best Buy, I could buy a Wi-Fi access point for $100. If I buy an enterprise access point from a large supplier, it costs $1,000.” I wanted to know what the difference is, so I hired five of the best university graduates one could ask for, and I asked them a trivial question. “Open both boxes, write the part numbers.” I had a really great friend at Qualcomm, and I remember this gentleman saying, “Tareq, you are becoming too dangerous.”

Right. You are the network operator. You’re their margin.

That is where everything started clicking for me. The chairman of Jio was not afraid to think the way I wanted to think, so I told him, “Look, I want to build our own Wi-Fi access point. If we buy an access point at $1,000, I am now convinced I could get you an access point at sub-$100.” A year later, the total cost of the Wi-Fi access point we built in Jio was $35.

This delta between $1,000 and $35 translates to a substantial amount of money saved, and it started by disaggregating everything. This cost structure, together with amazing partnerships with suppliers that secured great business terms, is what enabled Jio to offer service for free. Simplification of technology, LTE only, and an amazing process for network rollout all played huge factors in lowering the cost and economics for Jio.
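To make the arithmetic concrete, here is a minimal Python sketch of what that unit-cost delta means at fleet scale. The $1,000 and $35 figures come from the interview above; the fleet sizes are assumed for illustration, not Jio's actual deployment counts.

```python
# Hypothetical illustration of what hardware disaggregation can mean at fleet
# scale. The $1,000 and $35 unit costs come from the interview; the fleet
# sizes are assumed figures, not Jio's numbers.

VENDOR_UNIT_COST = 1_000.0   # enterprise access point, USD
IN_HOUSE_UNIT_COST = 35.0    # disaggregated in-house design, USD

def fleet_savings(units: int) -> float:
    """Total savings from building in-house instead of buying vendor units."""
    return units * (VENDOR_UNIT_COST - IN_HOUSE_UNIT_COST)

if __name__ == "__main__":
    for units in (10_000, 100_000, 1_000_000):
        print(f"{units:>9,} units -> ${fleet_savings(units):,.0f} saved")
```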

Let me ask you more about that. Jio is a transformative network, and is now obviously the most popular in India. You were able to offer a much lower-cost product than the traditional cell providers with what sounds like very clever business moves. You went and negotiated new kinds of provider agreements and you said, “We have to actually integrate our products, find lower chips at cost, and make our own products. We have to build a new, efficient way to deploy the network with our technicians.”

To your credit, those are excellent management moves. At their core though, they are not technology moves. Now that you are onto Rakuten and saying you are going to build O-RAN, that is a technology play. Broadly, it sounds like you are going to take the management playbook that made Jio work, and now you are lowering costs even further with the technology of O-RAN — or you are proving out a technology that will one day enable further lower costs.

There were two things I could not do in Jio, and it’s not really anybody’s fault; the timing just wasn’t right. If you look at building a mobile network, I think everybody now more or less understands that you need antennas, base stations, radio access, and core network infrastructure. But unless you are in this industry, you don’t realize the complexity of the operation tools that one needs in order to run and manage this distributed massive infrastructure.

The first thing I wanted to change in Jio is the traditional architecture. This management layer is called OSS [operations support systems], and it is archaic, to put it politely. If you work in an adjacent vertical industry, such as a hyperscaler or an internet-facing company, you will be scratching your head saying, “I cannot believe this is how networks are managed today.”

Despite the elegance of the Gs and changing from one to five, the process of managing a network is as archaic as you could ever imagine. True customer experience management remains elusive; it is still a dream that nobody has enabled. The first thing I wanted to do is to change the paradigm of having thousands of disaggregated toolsets to manage a network into a consolidated platform. It was an idea that I couldn’t drive in Jio. I will tell you why that is even more important than Open RAN. These building blocks are for a new architecture, the next generation of OSS.

If we build these operation platforms on a new, modern architecture that supports real-time telemetry, the idea is to get real-time information about every element and node that you have in your network. Being able to correlate that data and apply AI and machine learning on top of it requires modern-age platforms. It is so critical to my dream.
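As a rough illustration of that telemetry idea, here is a minimal sketch: stream per-node metrics into rolling windows and flag statistical outliers. The node name, metric, window size, and threshold are all invented for the example; a production OSS would apply trained models over far richer data.

```python
# Minimal sketch of real-time telemetry correlation: keep a rolling window
# per (node, metric) and flag outliers. Names and thresholds are invented.
from collections import defaultdict, deque
from statistics import mean, pstdev

WINDOW = 60          # samples of history per (node, metric)
Z_THRESHOLD = 3.0    # standard deviations that count as anomalous

history = defaultdict(lambda: deque(maxlen=WINDOW))

def ingest(node: str, metric: str, value: float) -> bool:
    """Record one telemetry sample; return True if it looks anomalous."""
    window = history[(node, metric)]
    anomalous = False
    if len(window) >= 10:  # need some history before judging
        mu, sigma = mean(window), pstdev(window)
        if sigma > 0 and abs(value - mu) / sigma > Z_THRESHOLD:
            anomalous = True
    window.append(value)
    return anomalous

# Example: a latency spike on one hypothetical radio unit
for v in [5.1, 5.0, 5.2, 4.9, 5.1, 5.0, 5.2, 5.1, 4.8, 5.0, 25.0]:
    if ingest("radio-unit-042", "latency_ms", v):
        print(f"anomaly on radio-unit-042: latency_ms={v}")
```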

Our success will not be celebrated because of Open RAN, but because of the grander vision: having Rakuten talked about as a company that did for mobile networks what Tesla has done for the electric vehicle industry in terms of autonomy. Autonomy in mobile networks is an absolutely amazing opportunity to build a resilient and reliable network that has better security architecture and does not need the complexity of the way we run and manage networks today. That was the first building block.

The impact of these big building blocks is massive. Here is the second thing I couldn’t do in Reliance Jio at the time. If you look at a pie chart on the cost structure for mobile networks, you may say, “Where do we spend money?” Regardless of geography, regardless of country, 70 to 80 percent of your spending always goes into this thing called radio access. Radio access today has been a private club that is really meant for about four or five companies, and that’s it. There is no diversification of the supply chain. You have no option but to buy from Ericsson, Nokia, Huawei, or ZTE. Nobody else could sell you the products of radio access.

The radio access products are the base stations?

Correct. Those are the base stations.

Which are the components of the cell tower?

Yes, and they contribute to about 70 percent of the CapEx [capital expenditure]. They are the one area that no startup has ever embraced and said, “You know what? Why don’t we try to disaggregate this? Why don’t we start to move away from the traditional architecture for how these base stations are deployed? Instead of running on custom hardware, custom ASICs, let’s use true software that runs on commodity appliances equivalent to what you would find inside data centers.”

This concept has been talked about, but nobody was willing to take the risk in any startup. Maybe it was the belief that your job is secure if you pick a traditional vendor. That is what I was thinking through, four years ago.

This is like “Nobody ever got fired for buying IBM.”

Something like that.

Let me ask you this. Is it because the initial investment is so high? There are not many startup wireless networks in the world. When they do start, they need an enormous amount of capital just to buy the spectrum. Are the stakes too high to take that kind of risk?

I think as an industry, we make the mistake of not rewarding and supporting startups the way we should. Our ability to incubate and build a thriving ecosystem that is built on new innovations, ideas, and startups is still a dream. I do not think anyone in telecom would argue with that. The reality is that everybody wants to see it happening, but we are just not there yet.

It was complex to do what we did in Japan. It was not simple, nor was it easy. When you have a running network carrying massive amounts of traffic, of course there are risks that you are going to have to take. The risk in that case is ensuring that you don’t disrupt your running base with poor quality services. Maybe the fear in people’s minds is that this technology is not ready, or integrating it into their networks is too complex, or they don’t have the right skillset to go into a software-defined world where they will need to upskill or hire a new organization.

You said that right now the four vendors are Ericsson, Nokia, Huawei, and ZTE. You have moved to Open RAN, open radio access, in Japan. Do you have more vendors than those four? Are you actually using commodity hardware with software-defined networking? Or is it still those four vendors but now you can run your code on them?

The foundation of success for Rakuten Mobile today started by Rakuten itself enabling and acquiring one of the most disruptive companies in this Open RAN space. We bought a company in Boston called Altiostar, and I thought they had everything one could dream about, except nobody was willing to give them a chance. I diversified my hardware supply chain and purchased hardware through 11 suppliers. I mandated where manufacturing can happen, in terms of product, security, and chipsets. Also, the era that we entered focused on heightened security, especially around 5G. I felt really good about our ability to control manufacturing and supply chain.

The software Altiostar provided was the radio software for this entire open access network in Japan. Altiostar software is now running over 290,000 radiating elements. I mean, this is massive; it serves 98 percent population coverage of Japan.

I give huge credit to the large vendors. Nokia had a very big internal debate when I told them, “I want to buy your hardware, but not your software.” I know their board had to approve it, but this is the beauty of software disaggregation. Now I buy the hardware from Nokia, and Altiostar runs the radio software on that platform. We now have a diversified supply chain and we are no longer just counting on four hardware suppliers. We have a common software stack. The big building block, which is this OSS, has enabled our own platforms and tools.

Rakuten purchased Boston-based Altiostar. We purchased an innovative cloud company in Silicon Valley called Robin.io for our Edge Cloud. We purchased the OSS company called InnoEye and formulated this integrated technology stack that is now part of Rakuten Symphony.

You have described Rakuten’s network as being in the cloud several times. Very simply, what does it mean for a wireless network to be cloud-based?

To give you an image, four years ago I was asked to do a keynote in Japan on my first day there. Thanks to my translator, I think people understood the concepts I was explaining to them. I said, “Here is an image of what we don’t want to build.”

If I show you how voice and video messaging are delivered, most of the telecom networks across the world, even today, are still running on boxes of proprietary hardware. Having a cloud network means that your workloads move away from proprietary implementations to network functions that are complete software components. These software components run with the beauty of what is called microservices, and with the elegance of things that cloud inherently supports, like capacity management, auto-elasticity, scale in, and scale out.
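Here is a toy sketch of the scale-out and scale-in behavior described above. In practice an orchestrator such as Kubernetes makes these decisions; the per-replica capacity and the thresholds below are assumptions for illustration.

```python
# Toy sketch of auto-elasticity for a software network function.
# Capacity and thresholds are assumed, not real operator settings.

CAPACITY_PER_REPLICA = 10_000   # sessions one replica can carry (assumed)
SCALE_OUT_AT = 0.80             # add a replica above 80% utilization
SCALE_IN_AT = 0.30              # remove one below 30% utilization

def desired_replicas(current: int, sessions: int) -> int:
    utilization = sessions / (current * CAPACITY_PER_REPLICA)
    if utilization > SCALE_OUT_AT:
        return current + 1
    if utilization < SCALE_IN_AT and current > 1:
        return current - 1
    return current

replicas = 2
for load in (12_000, 19_000, 26_000, 9_000, 4_000):
    replicas = desired_replicas(replicas, load)
    print(f"load={load:>6,} sessions -> {replicas} replica(s)")
```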

This is basic terminology. I’m not telling you about things that have been invented by Rakuten Mobile. It is thanks to Google, Microsoft, and Amazon, who have innovated like crazy on the cloud. I have just benefited from the innovation that they have done to deliver on scalability, resiliency, reliability, and a cost efficiency that one could never have imagined.

When it comes to the cost, this is a hyper-efficient operating structure. There are 279,000 radiating elements, and the operational headcount in Rakuten Mobile is still sitting below 250 people.

That’s crazy.

As the number increases, there is no direct proportionality between the number of units in the network and the number of employees running it. There is absolutely no direct correlation whatsoever anymore. To me, that is what cloud is all about. All the things on top of it are modules that you need in order to drive toward the operational efficiency that we achieved in Japan.

From an end user perspective, you have now architected this network differently. You have created a small revolution in the wireless industry from the provider level, where you can buy any hardware from 11 suppliers and run your software on it. Does the end user see an appreciable difference in quality? Or does it just lower the cost?

There is a huge difference from the end user point of view. One of the key reasons that Rakuten was encouraged and supported when we were determined to enter the mobile segment in Japan was that we felt competition was stagnant, and the cost per user was among the highest in the world.

To benefit the end consumer, we took a chapter from Jio’s strategy on lowering the cost burden economically. We did something that was so simple. At the time, the average plan rate in Japan was sitting about $100 US per user. We dropped that cost to $27 US, unlimited, no caps. When you go inside our stores, we change everything. We said, “Look, you don’t need to think about the plans. There is only one plan. That’s it.”

From a choices point of view, we made life super simple. We bundled local, we bundled international, we bundled everything under one local plan, and we tied it synergistically to the larger ecosystem of Rakuten. You acquire points as you buy things on e-commerce, as you buy things on our travel website, as you buy things from Rakuten Energy, or as you subscribe to Rakuten Bank. You could then use these points to pay off your cellular bill. The $27 could effectively be zero, because of the synergistic impact of other services you consume in Rakuten and the points you acquire from all of them.
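A tiny sketch of that points mechanic, with the $27 plan price from the interview and an assumed 1 percent earn rate on ecosystem spending (the real Rakuten points rules are more involved):

```python
# Toy arithmetic for the points mechanic described above. The $27 plan
# price comes from the interview; the 1% earn rate is an assumption.

PLAN_PRICE = 27.0   # USD per month
EARN_RATE = 0.01    # assumed: 1% of ecosystem spending returns as credit

def effective_bill(ecosystem_spend: float) -> float:
    credit = ecosystem_spend * EARN_RATE
    return max(PLAN_PRICE - credit, 0.0)

for spend in (500, 1_500, 3_000):
    print(f"${spend:,} ecosystem spend -> ${effective_bill(spend):.2f} mobile bill")
```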

Would Rakuten Mobile be profitable at $27 a customer? Is it being subsidized by the larger Rakuten?

We have to be profitable. Spectrum is not auctioned in Japan; we are allocated spectrum, but there are conditions attached. You cannot just run a business that is not profitable standalone. So we will break even in Rakuten Mobile and make it stand alone.

The way I think about it, it is not subsidized by the ecosystem. If I acquire you as a mobile customer, because of the impact I could bring to that larger sales contribution of you potentially buying from e-commerce or travel, I am using connectivity to empower the purchases of these 70-plus internet services, so we are actually contributing to the larger group. As long as the total top line revenue is increased because of mobile contribution, the group as a whole is going to be in good shape.

Even with standalone mobile, we are committed to our break-even point. We need to make it a profitable standalone business. The group as a whole has remarkable synergistic impact in our business. That is the benefit in value.

Now there is another benefit in the network architecture. Today there is endless marketing around Edge, but the definition is so simple. It is all about bringing content as close to your device as humanly possible. I would always argue that if you have nothing but virtual machines and network functions that are software, the ability to move these software components from large data centers all the way to the Edge is trivial. Relocating hardware is far more complex.

When the Edge use cases in Rakuten Mobile get delivered, you are hopefully going to hear some very amazing news about the lowest latency in the world delivered over the 5G network. This is the beginning of what is possible for new use cases for the consumer.

Think of cloud gaming. It has never been successful, at least in wireless, because networks could not sustain the latency that it would require. Speed, in my opinion, is a stupid metric to talk about. We should talk about latency, latency, latency! How do you deliver sub-4-millisecond latency on a wireless network?
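A back-of-the-envelope latency budget shows why edge placement matters for a sub-4-millisecond target. Light travels through fiber at roughly 200,000 km/s, so distance alone sets a floor; the processing overhead below is an assumed figure.

```python
# Rough latency budget for the sub-4 ms target discussed above.
# Fiber propagation ~200 km per millisecond one way; processing time assumed.

FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float, processing_ms: float) -> float:
    propagation = 2 * distance_km / FIBER_KM_PER_MS
    return propagation + processing_ms

# Same assumed 1.5 ms radio/compute overhead, different server placement
for label, km in (("regional cloud", 400), ("metro edge", 50), ("on-site edge", 5)):
    print(f"{label:>14}: {round_trip_ms(km, processing_ms=1.5):.2f} ms round trip")
```

Under these assumed numbers, only the edge placements come in under 4 ms, which is the point the speaker is making about moving software workloads close to the device.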

It hasn’t happened yet on licensed spectrum, but I think you are going to see it very soon. There is an advantage to this software architecture and the creation of new age applications for cloud gaming. Even as we talk, people are getting excited about the metaverse, which will need these use cases to come alive in the mobile fabric.

So you have talked about Open RAN, how you have built it, how you have architected the network for Rakuten Mobile, how you have new software layers, and how you have new hardware relationships. You are also the CEO of Rakuten Symphony, which is the company inside Rakuten that would then license all these components to other vendors. Dish Network in this country is one of those providers, and they are at the beginning stages of trying to build a brand new greenfield Open RAN 5G network. If you were going to build an Open RAN network in the United States, how would you do it?

My focus would probably be a lot different than many people would think. It is not about technology. I have never in my life approached a problem where I think technology is the issue. We do not deliver ourselves enough credit for how creative we are as human beings and our ability to solve complex problems.

The first thing I would start with is structure, organization, and culture. What is the culture you need to have to do amazing, disruptive things? When I moved to Japan, I didn’t know anything about it. I always knew that I wanted to visit, but I didn’t know about the complexities and challenges I would have to face. I mean, imagine being in the heart of Tokyo, being largely driven and supported by an amazing leadership team that says, “The world is your canvas, hire from anywhere.”

I have brought in 17 nationalities — relocated, not as expats, but as full-time employees in our office in Japan. Being this diversified, multicultural organization was the key. I did my own recruiting and handpicked my team. My focus was initially to find people with the spirit of warriors, who were willing to take on tough challenges and the bruises that came along with them, and who would not get discouraged by people telling them something would not work.

Long story short, I would not build a network that has looked the same for 30 years. I would not build a network a certain way just because Rakuten has done it that way. I think networks of the future must have this essence of software and must have autonomy built into their DNA. This is not just about Open RAN; this is a holistic approach for fundamental transformation in the network architecture.

I ask this question a lot and the answers always surprise me. Most companies that I think of as hardware companies, once they make the investment in software, they end up with more software engineers than hardware engineers. Is that the case for you?

I have no hardware engineers at all. None. I think from the beginning, this was done by design. I knew that I could create an ecosystem in hardware, and I don’t want to be in the hardware business. From a fundamental business model, I had enough credible relationships in this industry to cultivate and create an ecosystem for people that just enjoy being in hardware design. But that is not us; it is not our fabric, not our DNA.

The more I look at the world, the more I see the success of companies that have invested heavily into the right skill sets, whether it is from data science, AI, ML, or the various software organizations that they have built. This is what I thought we needed.

If you go to Rakuten Symphony’s largest R&D center in India, we now have over 3,500 people that only do software. To me, that is an asset that is unprecedented in terms of the extent of capability, what we could build, what we could deliver, and the scale that we could deliver at. I don’t want to invest in hardware. I just think that it is not my business.

Our investment is all about platform. I really enjoy seeing the advancements that we have enabled, though we are still early in this journey. I have a lot of other things I want to accomplish before I say that Symphony has succeeded.

Symphony is a first-of-its-kind company, since it is going to sell a new kind of operating platform to other carriers. Do you have competitors? Do you see this being the next turn of the wireless industry? Are we going to see other platform integrators like Symphony show up and say to carriers, “Hey, we can do this part for you. You can focus on customer service or fighting with the FCC or whatever it is that carriers do”?

To be very honest with you, I love the idea of having more competitors in this space. It challenges my own team to stay on their toes, which is really good. At the same time, having more entrants come into the space would help me cultivate the hardware ecosystem today.

Symphony is uniquely positioned; there are not a whole lot of people that could provide the integrated stack that Symphony has. Symphony’s biggest advantage is that it has a running, live lab carrying a large commercial customer base called Rakuten Mobile. Nobody tells me, “Don’t do this or that on Rakuten Mobile.” I could do disruptive ideas or disruptive innovation, and test and validate new products and technologies before giving them to anybody else.

It’s good to be the CEO of both.

I know. This is one of the reasons I accepted and volunteered. I thought for the short term, it would be important to be able to control these two ecosystems, because Japan is a quality-sensitive market. If I build a high-quality network, nobody will doubt whether Symphony’s technology stack is credible, scalable, reliable, or secure. We are uniquely positioned because of our ability to deliver on a robust automation platform, Open RAN software technology architecture, and innovative Edge Cloud software.

I don’t see many in the industry that have the technology capabilities today that Symphony offers. People have bits and pieces of what we have, but when I look at the integrated stack, I’m really happy to see that we have some unique intellectual property that is remarkably differentiated from the market today.

So Dish is obviously a client. We will see how their network goes. Are you talking to Telefónica, Verizon, and British Telecom? Are they thinking about O-RAN in this way?

Since it’s public in the US, I can talk about it. As I mentioned before, it is not just about the O-RAN discussion for me, it is about the whole story. We announced at the last Mobile World Congress that AT&T is working with Rakuten Symphony on a few disruptive applications around digital workflows for wireless and wireline operations, as are Telefónica in the UK and Telefónica in Germany. Our first big breakthrough was an integrated stack.

In the heart of Europe, in Germany, we are the provider for a new greenfield operator called 1&1. I told the CEO of 1&1 that my dream is to build Rakuten 2.0 in Germany, so we are building the entire fabric of this network. It has been an amazing journey to take all the lessons learned from Japan and be able now to bring them to Germany. We are in the early stages, but I am really optimistic to see what the future will hold for Open RAN as a whole for Symphony.

Rakuten Mobile and Rakuten Symphony have opened a much-needed, healthy debate in the industry about radio access vendor alternatives and the diversification we need in order to move to a software-driven network. We feel that is a big accomplishment for us.

As you build out the O-RAN networks, one thing that we know very well in the United States is that our handset vendors — Apple, Samsung, Google, Motorola — are very picky about qualifying their devices for networks.

Oh yes.

Is there a difference in the conversation between a traditional network and an O-RAN network, when you go and talk to the Apples and Samsungs of the world?

Yes. I have to tell you about the pleasure of working with the likes of Apple before we were approved as a mobile company to sell their devices. I’m being really honest about this; I really liked it. Their bar for quality was really high, as were their requirements for accepting and certifying the quality of a network. I thought if we got the certification that we needed from them, that’s another third-party audit; I would have cleared a big quality hurdle.

The Apple engineering team is really strong. They really understood the technology, which was great. There are a lot of facets to it that are fascinating. No matter how great your network is, you have to pass a set of KPIs and metrics for device certification. This was not trivial. I went through the same journey with Jio, so I have some ideas about the bars to acceptance set by large device manufacturing companies. I also knew that this is a process of identifying issues, solving them, coming back to the device vendors, and continuing to iterate on improving the quality.

I went through the same journey at Rakuten Mobile. Just slightly after our commercial launch, we got our commercial certification to sell Apple devices, and that was a big relief for all of us. A big relief, because it means that we reached a quality level that they deem minimally acceptable to carry the device.

Of course we monitor the quality every day, so I’m really happy that we have done this. We have proven that the Open RAN network, especially the software that we have built in Japan, is running with amazing reliability. Rather than celebrating our courageous attempt to do something good for everybody, the early days of our journey were all about skepticism. Like, “This will not work. This will not work.”

Was Apple more skeptical of your network going into tests than others since the technology is different?

The device vendors were very supportive. The skepticism came from the fear, uncertainty, and doubt spread by traditional OEMs and vendors who wanted to tell everybody that this technology is horrible. It got to such an extent that I ignored everything. I still do today. I say you cannot argue against the benefit that cloud brought to IT and the enterprise. There is an indisputable benefit to this. When it comes to telco, why would you argue against the advantage and benefit of moving all your workloads to the cloud?

I think this debate is ending, and it is ending much quicker and in a better place for everybody. I have huge admiration for what Apple has done. It’s a really impressive company. The more that we continue to engage with them, the more we can tell that this company is obsessed with quality. I thought if we cleared the hurdle of getting their acceptance, then it shows another validation for us that we are running a high-quality network. They are a strategic, critical part of our provider ecosystem today in Japan.

Let me flip this question around real quick. One of my favorite things about the Indian smartphone market is how wide open it is on the device side. This is something that happened after Jio rolled out, but I was friends with a former editor of Gadgets 360 in India, Kunal Dua, and he told me, “My team covers 12 to 15 Android phone launches a week.”

The device market is wide open, you can connect anything, there are dual SIMs, and the actual consumer experience of picking a phone is one of unlimited choice. That is not the case in the United States or in other countries. What do you think the benefits of that are? I am quite honestly jealous that there is that much choice in that market.

I think a couple of things really benefit India quite a bit. When you have massive volume, people are intrigued to enter because of the economies of scale that exist. Certain things have changed in Japan as well. Government policies are now mandating support for open device ecosystems.

In our case, we even told them that 100 percent of our device portfolio will support eSIM, which gives you the ability and flexibility to switch carriers within one second. You can just say, “Oh, I don’t like this. I like this.” The freedom of choices is just unparalleled. We, as Rakuten Mobile, changed the business model. We said, “Look, we will enable eSIM. There are no fees for termination of contracts. There are no fees for anything. If you don’t like us, you can leave. If you do like us, you are part of our family.”

We made it really simple, because it is a dream for us to build an open ecosystem. We are trying to see if it is relatively successful to open up a storefront for open device markets, since we own a very large e-commerce website. Come in, purchase, and acquire.

The difference between India and the US is that India does not subsidize the device. As a consumer in the US, you have been trained that you can buy an iPhone by signing a contract, and the iPhone will be subsidized by the carrier. A consumer could benefit from this open device ecosystem, but there would have to be a mentality change. Will a consumer accept the idea that they have to buy a device? From a carrier point of view, I still argue that if they don’t subsidize, maybe they could lower the cost of their tariffs.

It is still an evolution. For us in mobile, we have pretty much adopted what India has done. We said, “bring your own device,” and we promoted all these devices that you are talking about in India. We brought them into our e-commerce site. In Japanese, it is called Ichiba. So we brought them to the Ichiba website, gave them a storefront, let them advertise, and let them market. Our website has a massive amount of daily active users that come to it, and we do not necessarily benefit from selling their devices, but we don’t want to subsidize any device. That is subjective.

What is the biggest challenge of O-RAN? You have a long history in this industry. I’m sure many challenges are familiar to you in building a traditional network. What is the biggest, most surprising challenge of building it in this way?

Let me tell you the part that I was surprised about. Some parts were easier, some more difficult. If I take you to a traditional base station and we examine what is really there at the radio site, we will find that almost 95 percent of every deployment is the same. Basically, there is a big refrigerator cabinet, and inside this cabinet there is something called the baseband. This is the brain of the base station. The baseband was built on custom ASICs, which meant large companies had to constantly invest in that hardware development.

The first thing that we have done is take the software out and move it onto what are called COTS (commercial off-the-shelf) appliances, like a traditional data center server. I recognize that the software only gets better; there are no issues with software. The difficult part was that the hardware components you need for the base station are really complex.

At every site, there is an antenna that has a transmitting unit, called either a remote radio head or, in 5G, massive MIMO. These products need to support a huge diversity of spectrum bands, because every country has different spectrum bands and different bandwidths. If you are a traditional provider — say Nokia, Ericsson, Huawei, ZTE — these companies have invested in large organizations, with tens of thousands of people, whose entire job is to create this massive hardware that can support all these diversified spectrum bands.

My number-one challenge with Rakuten Mobile was to find these hardware suppliers, because there are not a whole lot of them for Open RAN. Hardware suppliers that could support diversified spectrum requirements — because country to country it will be different — turned out to be a really big challenge. The approach that we have taken in Japan is to go to mid-size companies and startups. I funded them and encouraged them to build the hardware that we need.

My biggest challenge and my biggest headache is spending time trying to find a company that has capability and scale to become the hardware provider for Open RAN at the right cost structure. The hardware you need for both 4G and 5G is not to be underestimated. I think it is easier to solve the issues around some of the RF units that one would need for these base stations. This is my personal challenge, and I know the industry as a whole needs to solve for this.

I know these are complicated products, but are these companies worried that it is a race to the bottom? Most PC vendors ship the same Intel processors, the same basic parts, and they have to differentiate around the edges or do services for recurring revenue. We talk about this on Decoder all the time. The big four that you mentioned sell you the whole stack and then charge for service and support. That is a very high-margin business. If you commoditize the hardware and say, “I am going to run my own software,” do those companies worry it is just a race to the bottom?

Let’s differentiate between large companies and new entrants. I think new entrants in hardware are comfortable and content understanding the value they provide by being commodity suppliers. Let me give you an analogy. Apple uses Foxconn to manufacture its devices, and I am sure Foxconn will not tell you it is unhappy about this business model. Foxconn has built its entire strategy around high-value engineering, high yield, and high-capacity manufacturing, because that is how it makes revenue. It does not bundle support services.

I found that the new age manufacturing companies I was looking for were companies like Foxconn. Companies that understand the new business model that I want to create.

The most amazing thing, which some companies are probably not aware of, is the strength that we have in the United States in silicon companies. They are genuinely among the most innovative in the world in terms of capability. It still exists in the US; we still control this. Today, Qualcomm, Intel, Nvidia, Broadcom, and many other companies provide a lot of the technology that is needed for these products. We go and build reference designs directly with the silicon companies, and then I take that reference design, go to a contract manufacturer, and say, “Build this reference design.”

This new way of working seems like the future. Hopefully one day, in the hardware supply chain ecosystem, many companies like Foxconn will exist and will appreciate the value of building hardware for all suppliers. Maybe Ericsson or Nokia will one day have to evaluate the opportunity to pivot into a software world that may command a much better valuation.

Look at the stock price of traditional telecom companies today. Look at the stock price of ServiceNow, a digital workflow tool. Look at the difference between them. One is a complete SaaS model; one lives on a traditional business model. I don’t think the market appreciates and recognizes that this may be the right thing to do.

It seems like it is inevitable. It is just a matter of time for traditional vendors to start pivoting. I want this hardware to be commoditized. It is very important. The value you compete on has to be software, it cannot be hardware.

Rakuten Mobile is only a couple years old. It is the fourth carrier in Japan, and you have 5 million subscribers. Japan is a big country. KDDI has 10X the subscribers. Is the ambition to be the number one carrier, like Jio became the number one carrier in India? How would you get there?

I am really proud about what we have done in Japan. I think for many people that have been through this journey of building networks, they will know it is not a trivial process. We had two pragmatic challenges.

First, we had to prove to the world that a new technology actually works and delivers on cost, resiliency, and reliability. That’s a check mark; done. That is not just me telling you today; it has been audited by a third party. Look at the performance, quality, and reliability we deliver. Second, if you are in the mobile business, there is one area that new technology cannot easily solve for you. You need to have ubiquitous coverage everywhere and anywhere you go.

I am not sure if you have ever visited Tokyo, but you should know it is a concrete jungle. It’s amazing. The density that exists in an area like Tokyo, the subways and the coverage you have to provide for them, and the amount of capacity you have to cater for, is not trivial. In two years, we have been able to build a network covering 96 percent of Japan. I have never seen a network built at this speed and scale.

So our ambition is not to be the fourth mobile operator in Japan. It is to be a highly disruptive ecosystem provider that takes the number-one position in this country. The approach we take here is very simple. We need to ensure that ubiquitous, high-quality coverage is delivered anywhere you go in Japan. We are almost there.

I’m not just talking about the outdoors. High-rises, indoor, deep indoor, basements, subways. Anywhere and everywhere you go, an amazing network must be delivered. And second is the points, membership, and loyalty program that I talked to you about earlier. We think that’s a huge differentiator from the competitors: bringing much bigger value, and being obsessed with the customer experience and the services that we offer.

From being an infant to where we are today, I am really happy about what the team has accomplished, but we have a lot of work to do to finish the last remaining 3 percent of our build. That percent is extremely important to achieve the quality of coverage that we need to really be on par or better.

I know that the cost of running my network today is 40 percent lower than any competitor's in Japan. I have an advantage around network cost structure that is virtually impossible for anybody in Japan to compete against today. That gives me a leg up on what we could do, what business models we could experiment with, and the actions that we will take. You will see us be very decisive in our approach, because we don’t want to just be another carrier in Japan. We want to be the leading mobile operator in this country.

All right, Tareq. That was amazing. I feel like I could talk to you for another full hour about this. Thank you so much for being on Decoder.

Thank you.

Welcome Our Robot Overlords: Why I Think AI Creative Apps Are About to Disrupt the Business of Content

Opinions expressed are solely those of the author and do not reflect the views of Rolling Stone editors or publishers.

I so perfectly recall the magical day I bought my first computer. I was about 13 years old and had scrimped and saved from my teenage moneymaking schemes to assemble a grand total of $115. At the local big box store, that would get me a Timex Sinclair 1000.

Now understand that I am a digital fossil, but if you’ll just imagine all of this through the early Eighties lens of a Stranger Things episode, it will start to come into focus. These were days when there wasn’t any dominant computer platform. Steve Jobs and Steve Wozniak were just coming out of the garage with the Apple I. IBM earnestly presented its plain vanilla IBM 5150 (which we can only assume it didn’t realize was also the police Welfare and Institutions Code). At the fringes were all kinds of nascent and primitive magical bits of kit from companies like Atari, Commodore and Sinclair. They were cheap, a bit brutalist and required a nerdy dedication to make work.

From the very first, I wanted one thing from my computer: I wanted a friend. I desperately wanted to teach that computer to talk to me. In an adolescence marked by movies like WarGames and episodes of Star Trek, I had developed a mind that wanted to see these dreams of a technological future come to life.

I wrote a conversation bot. I typed “Hello” and it responded, “Hi there, how are you?” Then, magic happened. It would look for words like “happy” or “sad” in my 13-year-old responses, and it would reply with “that’s great!” or “sorry to hear that!”
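A bot like that is only a few lines. Here is a minimal reconstruction in Python, with invented word lists, of the keyword-matching trick described above:

```python
# Minimal reconstruction of a keyword-matching conversation bot:
# pattern matching, not understanding. Word lists are invented.

RESPONSES = [
    ({"hello", "hi"}, "Hi there, how are you?"),
    ({"happy", "great", "good"}, "That's great!"),
    ({"sad", "bad", "tired"}, "Sorry to hear that!"),
]

def reply(line: str) -> str:
    words = set(line.lower().split())
    for keywords, response in RESPONSES:
        if words & keywords:
            return response
    return "Tell me more."

print(reply("Hello"))             # -> "Hi there, how are you?"
print(reply("I am happy today"))  # -> "That's great!"
```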

The next summer I attended a weekend camp at MIT for kids and computers. Most of the kids wanted to create space games. But I was still chasing my talking computer friend — and I had ambitions beyond that. I wanted to teach my friend how to paint with pixels. If I could ever describe a scene from a movie to a computer, could it make the movie? The camp counselors at MIT, beleaguered undergrads making some financial aid money, weren’t going to get me there. Space warfare it was.

Flash Forward

Last January, I got a message from a friend. He had found a pulsing little community of code freaks who were using machine learning apps to make pictures based on a description or prompt. He sent me a few pictures. My mind was blown. They were amazing. Some were painterly, delicate brushstrokes and surprising compositions that seemed to be the undiscovered work of masters. Others were photographic, high-resolution images of strange characters or steampunk jewelry, with a deep and luscious depth of field.

Then began a month of sleep deprivation and family abandonment. I could do nothing beyond experimenting with this incredible new image-creating “friend.” I tried feeding it fragments of poetry and song, which led to the creation of images I could never imagine but which were spot-on visual representations of narrative. I probed further — what happened if I wanted to create 25 variations of a logo or trial renderings of architectural space by Zaha Hadid? The results kept amazing me. Unexpected results would often bubble up, ranging from hilarious misunderstandings to strange interpretations — or just wrong guesses. But sometimes they were creative leaps that I hadn’t ever thought of.


How did all of this work? One thing to understand is that it’s not creative intelligence. This is pattern matching, or maybe more appropriately pattern finding. These code engines have been exposed to massive datasets: famous art, artists, design movements, contemporary culture, architectural styles, historical events, and consumer information. The more the code can be exposed to and cataloged, the more raw materials it has. In most cases, it starts with visual noise: foggy static that the code chips away at like a sculptor, creating composition, shape and points of view. Then within it, based on the user input, the specifics of the image and style are revealed.
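Here is a deliberately toy, non-ML Python sketch of that "start from noise and chip away" loop. Real systems use trained diffusion models conditioned on a text prompt; here the "prompt" is just a target array, so the sketch only illustrates the iterative refinement, not the learning.

```python
# Toy illustration of iterative refinement from noise. A real generator
# uses a trained model; here the "prompt" is a known target array.
import numpy as np

rng = np.random.default_rng(0)
target = np.linspace(0.0, 1.0, 64).reshape(8, 8)  # stand-in for "what the prompt wants"
image = rng.random((8, 8))                        # start from pure noise

for step in range(50):
    image += 0.1 * (target - image)               # chip away toward the pattern

print(f"remaining difference: {np.abs(image - target).mean():.4f}")
```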

Similar tools abound: copywriting apps that could create blog posts, listicles and long-form writing; teaching apps that take a script and provide a virtual actor speaking and explaining it with convincing sincerity; musical scoring tools that translate a few twists of whimsical emotion and vibe knobs into complete pieces of a song.

So, for real, if you’re a graphic artist, a copywriter or a musician, is this robot coming for your job?

That’s a complicated question. The tech needs more development. Getting specific outcomes out of it isn’t always straightforward. But the quality of the output is impressive. The rate of advancement is a bit blinding. The big purveyors of creative toolsets are already moving fast to deploy this functionality. From word processors to photo retouching to film and game development software, I believe we are about to see an ability to promote the computer from tool to collaborator.

At that point, it does seem inevitable that what humans work on and what computers do will change. Concept art, project treatments, outlines, drafts, social media copy, thumbnail graphic creation, mood boards and elements of game-level design — these are already starting to be tasks that are being taken on by AI.

Humans still need to do the describing. While I think the computers will get there too, in their own way, I am still a believer in something ineffable in the human soul. Maybe because we are a crazy soup of evolution and weird world views, there is poetry, song, beats and ideas that silicon can’t quite get because being messed up in that tragically human kind of way is actually maybe the secret sauce.

In the meantime, I am joyfully playing with my creative robot “friends.” Maybe later on, when they are in charge, they will still come around and make time to play with me.

IBM Uses Power10 CPU As An I/O Switch

Back in early July, we covered the launch of IBM’s entry and midrange Power10 systems and mused about how Big Blue could use these systems to reinvigorate an HPC business rather than just satisfy the needs of the enterprise customers who run transaction processing systems and are looking to add AI inference to their applications through matrix math units on the Power10 chip.

We are still gathering up information on how the midrange Power E1050 stacks up on SAP HANA and other workloads, but in poking around the architecture of the entry single-socket Power S1014 and the dual-socket S1022 and S1024 machines, we found something interesting that we thought we should share with you. We didn’t see it at first, and you will understand immediately why.

Here is the block diagram we got our hands on from IBM’s presentations to its resellers for the Power S1014 machine:

You can clearly see an I/O chip that adds some extra PCI-Express traffic lanes to the Power10 processor complex, right?

Same here with the block diagram of the Power S1022 (2U chassis) machines, which use the same system boards:

There are a pair of I/O switches in there, as you can see, which is not a big deal. Intel has co-packaged PCH chipsets in the same package as the Xeon CPUs with the Xeon D line for years, starting with the “Broadwell-DE” Xeon D processor in May 2015. IBM has used PCI-Express switches in the past to stretch the I/O inside a single machine beyond what comes off natively from the CPUs, such as with the Power IC922 inference engine Big Blue launched in January 2020, which you can see here:

The two PEX blocks in the center are PCI-Express switches, either from Broadcom or MicroChip if we had to guess.

But, that is not what is happening with the Power10 entry machines. Rather, IBM has created a single dual-chip module with two whole Power10 chips inside of it, and in the case of the low-end machines where AIX and IBM i customers don’t need a lot of compute but they do need a lot of I/O, the second Power10 chip has all of its cores turned off and it is acting like an I/O switch for the first Power10 chip that does have cores turned on.

You can see this clearly in this more detailed block diagram of the Power S1014 machine:

And in a more detailed block diagram of the two-socket Power S1022 motherboard:

This is the first time we can recall seeing something like this, but obviously any processor architecture could support the same functions.

In the two-socket Power S1024 and Power L1024 machines

What we find particularly interesting is the idea that those Power10 “switch” chips – the ones with no cores activated – could in theory also have eight OpenCAPI Memory Interface (OMI) ports turned on, doubling the memory capacity of the systems using skinnier and slightly faster 128 GB memory sticks, which run at 3.2 GHz, rather than having to move to denser 256 GB memory sticks that run at a slower 2.67 GHz when they are available next year. And in fact, you could take this all one step further and turn off all of the Power10 cores and turn on all of the 16 OMI memory slots across each DCM and create a fat 8 TB or 16 TB memory server that through the Power10 memory area network – what IBM calls memory inception – could serve as the main memory for a bunch of Power10 nodes with no memory of their own.
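The memory arithmetic here is easy to check. This short sketch multiplies out the slot counts and DIMM sizes mentioned above; the two-socket totals are simple multiplication, not an IBM-published configuration.

```python
# Arithmetic behind the fat-memory idea above: 8 OMI ports per Power10 chip,
# two chips per DCM, multiplied out for the DIMM sizes the article mentions.
# The two-socket totals assume a two-DCM box; they are our math, not IBM specs.

SLOTS_PER_DCM = 16  # 8 OMI ports per chip x 2 chips, per the article

def capacity_tb(dimm_gb: int, dcms: int = 1) -> float:
    return SLOTS_PER_DCM * dcms * dimm_gb / 1024

for dimm in (128, 256):
    print(f"{dimm} GB DIMMs: {capacity_tb(dimm):.0f} TB per DCM, "
          f"{capacity_tb(dimm, dcms=2):.0f} TB in a two-socket system")
```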

We wonder if IBM will do such a thing, and also ponder what such a cluster of memory-less server nodes talking to a centralized memory node might do with SAP HANA, Spark, data analytics, and other memory intensive work like genomics. The Power10 chip has a 2 PB upper memory limit, and that is the only cap on where this might go.

There is another neat thing IBM could do here, too. Imagine if the Power10 compute chip in a DCM had no I/O at all but just lots of memory attached to it and the secondary Power10 chip had only a few cores and all of the I/O of the complex. That would, in effect, make the second Power10 chip a DPU for the first one.

The engineers at IBM are clearly thinking outside of the box; it will be interesting to see if the product managers and marketeers do so.

NASA and its IBM computers

The Real-Time Computer Complex (RTCC) is located at the NASA Mission Control Center in Houston, TX.

In 1962, the RTCC housed several IBM large-scale data processing mainframe digital computers.

Think of the RTCC as the computing brain that processes mountains of data to guide nearly every portion of a NASA spaceflight mission. Flight controllers and engineers in the Mission Control Center depended on the RTCC.

Apollo 13 launched on April 11, 1970. Two days later, an oxygen tank in its service module exploded while the spacecraft was halfway to the moon. Numerous flight controllers in the Mission Control room desperately attempted to ascertain how serious the situation was while communicating with the astronauts aboard the Apollo 13 command module.

NASA Flight Director Gene Kranz directed his Mission Control team, saying clearly and firmly, “OK, listen up … Quiet down, people. Procedures, I need another computer up in the RTCC.”

The quick thinking and resourcefulness of NASA flight controllers and engineers, along with the courage and professionalism of the Apollo 13 astronauts, resulted in their safe return to earth.

Credit for their safe return should also go to the five high-performance IBM System/360 Model 75 computers in the RTCC.

About 16 years earlier, the 1954 IBM 704 digital mainframe computer operated using a low-level assembly language and a high-speed magnetic core storage memory, replacing the electrostatic tube storage used in previous IBM computers.

In 1957, Sputnik 1, Earth’s first artificial satellite, was tracked during its orbit around the planet by two IBM 704 computers.

In 1959, the IBM 1401 mainframe computer arrived; it could be programmed with a high-level language, FORTRAN (Formula Translation/Translator), the coding system created by IBM programmer John Backus in 1957 and tested on the IBM 704.

Backus said FORTRAN reduced what had previously required 1,000 machine-language instructions to only 47 statements, significantly increasing computer programmer productivity.

In 1961, NASA launched two crewed Mercury suborbital flights. IBM 7090 computers installed at the NASA Ames Research Center assisted engineers and mission flight controllers by quickly performing thousands of calculations per second.

The 1965 NASA Gemini spacecraft’s 59-pound onboard digital guidance computer was manufactured by IBM. It used a 7.143-kilohertz processor clock and could execute more than 7,000 calculations per second.

In 1969, IBM’s computer reliability was credited with keeping Apollo 12 on its proper trajectory after a potentially catastrophic event.

On Nov. 14, 1969, about 37 seconds after the Apollo 12 Saturn V rocket left the launchpad at Cape Canaveral, two lightning bolts struck it, knocking out all of the command module’s onboard instrumentation systems and its telemetry link with Mission Control in Houston.

“What the hell was that?” shouted Apollo 12 command module pilot Richard Gordon after lightning struck the Saturn V rocket traveling at 6,000 mph.

Fortunately, two-way radio communications were still functioning between Mission Control and the command module spacecraft.

“I just lost the whole platform,” Apollo 12 mission commander Charles Conrad Jr. radioed Mission Control. “We had everything in the world drop out,” he added.

The static discharge from the lightning caused a voltage outage, knocking out most of the Apollo 12 command module’s control systems and disconnecting its vital telemetry communications link with Mission Control.

Loud, overlapping voices could be heard in Mission Control as engineers and flight controllers worked on what course of action to take.

Fortunately, the Apollo 12 Saturn V rocket did not deviate from its planned trajectory. Instead, the IBM 60-pound Launch Vehicle Digital Computer (LVDC) housed inside the Instrument Unit section of the rocket’s third stage contained the required processing power to continue the Saturn V’s programmed course.

Meanwhile, Mission Control engineers saw strange data pattern readings on their control screens and desperately worked to find a solution.

NASA flight controller and engineer John Aaron recalled seeing similar data patterns during simulation tests. He remembered that they meant the Signal Conditioning Electronics (SCE) were down.

“Flight, try SCE to AUX,” Aaron recommended to Mission Flight Director Gerry Griffin.

Griffin had the recommendation radioed to the astronauts in the command module.

One minute after the lightning strike, Mission Control radioed the astronauts in the Apollo 12 command module with the following:

“Apollo 12, Houston. Try SCE to Auxiliary. Over.”

There was a brief pause as the astronauts heard what they thought was the acronym “FCE” instead of “SCE.”

“Try FCE to Auxiliary. What the hell is that?” Conrad questioned Mission Control.

“SCE – SCE to Auxiliary,” Mission Control slowly repeated with emphasis.

Apollo 12 pilot astronaut Alan Bean was familiar with the SCE switch inside the command module. So, turning around in his seat, he flipped SCE to AUX, which restored and normalized the command module instrumentation data and telemetry transmissions.

Apollo 12 was able to complete its mission to the moon, thanks in significant part to the reliability of the IBM LVDC and, of course, Aaron’s “SCE to AUX.”

In 1962, science fiction writer Arthur C. Clarke witnessed a demonstration at Bell Labs where scientists used an IBM 7094 computer to create a synthesized human voice singing the song “Daisy Bell (Bicycle Built for Two).”

This demonstration by the IBM computer inspired Clarke to write a much-remembered scene in the 1968 science fiction movie “2001: A Space Odyssey” featuring the somewhat sentient “Heuristically programmed ALgorithmic” computer known as the HAL 9000.

In the movie, the HAL 9000 computer is singing “Daisy Bell (Bicycle Built for Two)” while deactivating to inoperability as astronaut David Bowman removes its computing modules.

For the record, the HAL 9000 was not an IBM computer.

IBM 'continuing to hire' ahead of recession


But IBM has always carved its own path. For example, the Armonk, NY-based company doesn’t use the term “Great Resignation,” at least internally. Of course, that doesn’t mean the tech giant isn’t aware of the nationwide talent shortage and the highly competitive labor market that’s resulted.

“This is a time to ensure we re-engage our population,” Louissaint says. “By nature of my title, my goal is to continue to transform and pivot our company toward being more growth-minded, transforming it directly through leadership: leadership development, getting people in the right jobs and ensuring we have the right succession plans.”

Like many companies since the COVID-19 pandemic, IBM has relied upon its business resource groups – its label for employee resource groups (ERGs) – to maintain and even boost retention. Traditionally, ERGs consist of employees who volunteer their time and effort to foster an inclusive workplace. Due to their motivations, needs and the general nature of ERG work, employees who lead these groups are more likely to be Black, Indigenous and People of Color (BIPOC) and oftentimes women. ERGs are a way for underrepresented groups to band together to recruit more talent like them into their companies and make sure that talent feels supported and gets promoted.

“It’s a lot easier to leave a company where you’ve only interacted with colleagues through a screen,” Louissaint says. “Our diversity groups and communities have gotten a lot stronger, which builds commitment to the company and community to each other. We’ve found that through our communities, business resource groups, open conversations and by democratizing leadership by using virtual technologies like Slack, the company has become smaller and the interactions are a lot more personal.”

A major contributor to the Great Resignation has been the push for workers to return to the office. While Apple and Google have ruffled feathers by asking employees back for at least a couple of days a week, Tesla went a step further, demanding employees report to the office five days a week, as if the COVID-19 pandemic had never happened.

Ahead of the game, IBM was one of the first major tech firms to embrace remote work, with as much as 40% of its workforce at home during the 2000s. A shift came in 2017, when the company called many of its remote workers back to the office; since the pandemic, however, only 20% of the company’s U.S. employees are in the office three days a week or more, according to IBM CEO Arvind Krishna. In June, Krishna added that he doesn’t think the balance will ever return to more than 60% of workers in the office.

“We’ve always been defined by flexibility, even prior to the pandemic that’s what we were known for and what differentiated us,” Louissaint says. “Continuing to double down on flexibility has been a value to us and to our people.”

IBM has also been defined by its eye toward the future, particularly when it comes to workforce development. Over the past decade, the tech giant has partnered with educational institutions, non-governmental organizations and other companies to discover and nurture talent from untapped pools and alternative channels. Last year, the company vowed to train 30 million individuals on technical skills by 2030.

“Our people crave learning and are highly curious,” Louissaint says, adding that the average IBM employee consumes about 88 hours of learning through its platform each year. Nearly all (95%) employees are on the platform in any given quarter.

“We’ve been building a strong learning environment where employees can build new skills and drive toward new jobs and experiences,” he says. “We also find that the individuals who consume the most learning are more likely to get promoted. It’s 30% more likely for a super learner to be promoted or switch jobs, so the incentive is continued growth and opportunity for advancement.”

Source: https://www.hcamag.com/us/specialization/learning-development/ibm-continuing-to-hire-ahead-of-recession/415538 (3 August 2022)
Killexams : Cybersecurity - what’s the real cost? Ask IBM

Cybersecurity has always been a concern for every type of organization. Even in normal times, a major breach is more than just the data economy’s equivalent of a ram-raid on Fort Knox; it has knock-on effects on trust, reputation, confidence, and the viability of some technologies. This is what IBM calls the “haunting effect”.

A successful attack breeds more, of course, both on the same organization again, and on others in similar businesses, or in those that use the same compromised systems. The unspoken effect of this is rising costs for everyone, as all enterprises are forced to spend money and time on checking if they have been affected too.

But in our new world of COVID-19, disrupted economies, climate change, remote working, soaring inflation, and looming recession, all such effects are amplified. Throw in a war hammering at Europe’s door (with political echoes across the Middle East and Asia) and it’s a wonder any of us can get out of bed in the morning.

So, what are the real costs of a successful cyberattack – not just hacks, viruses, and Trojans, but also phishing, ransomware, and concerted campaigns against supply chains and code repositories?

According to IBM’s latest annual survey, breach costs have risen by an unlucky 13% over the past two years, as attackers, which include hostile states, have probed the systemic and operational weaknesses exposed by the pandemic.

The global average cost of a data breach has reached an all-time high of $4.35 million – at least, among the 550 organizations surveyed by the Ponemon Institute for IBM Security over the year from March 2021 to March 2022. Indeed, IBM goes so far as to claim that breaches may be contributing to the rising costs of goods and services. The survey states:

Sixty percent of studied organizations raised their product or services prices due to the breach, when the cost of goods is already soaring worldwide amid inflation and supply chain issues.

Incidents are also “haunting” organizations, says the company, with 83% having experienced more than one data breach, and with 50% of costs occurring more than a year after the successful attack.

Cloud maturity is a key factor, adds the report:

Forty-three percent of studied organizations are in the early stages [of cloud adoption] or have not started applying security practices across their cloud environments, observing over $660,000 in higher breach costs, on average, than studied organizations with mature security across their cloud environments.

Forty-five percent of respondents run a hybrid cloud infrastructure. This leads to lower average breach costs than among those operating a public- or private-cloud model: $3.8 million versus $5.02 million (public) and $4.24 million (private).

That said, those are still significant costs, and may suggest that complexity is what deters attackers, rather than having a single target to hit. Nonetheless, hybrid cloud adopters are able to identify and contain data breaches 15 days faster on average, says the report.

However, with 277 days being the average time to identify and contain a breach – an extraordinary figure – the real lesson may be that today’s enterprise systems are adept at hiding security breaches, which may appear as normal network traffic. Forty-five percent of breaches occurred in the cloud, says the report, so it is clearly imperative to get on top of security in that domain.

IBM then makes the following bold claim:

Participating organizations fully deploying security AI and automation incurred $3.05 million less on average in breach costs compared to studied organizations that have not deployed the technology – the biggest cost saver observed in the study.

Whether this finding will stand for long as attackers explore new ways to breach automated and/or AI-based systems – and perhaps automate attacks of their own invisibly – remains to be seen. Compromised digital employee, anyone?

Global systems at risk

But perhaps the most telling finding is that cybersecurity has a political dimension – beyond the obvious one of Russian, Chinese, North Korean, or Iranian state incursions, of course.

Concerns over critical infrastructure and global supply chains are rising, with threat actors seeking to disrupt global systems that include financial services, industrial, transportation, and healthcare companies, among others.

A year ago in the US, the Biden administration issued an Executive Order on cybersecurity that focused on the urgent need for zero-trust systems. Despite this, only 21% of critical infrastructure organizations have so far adopted a zero-trust security model, according to the report. It states:

Almost 80% of the critical infrastructure organizations studied don’t adopt zero-trust strategies, seeing average breach costs rise to $5.4 million – a $1.17 million increase compared to those that do. All while 28% of breaches among these organizations were ransomware or destructive attacks.

Add to that, 17% of breaches at critical infrastructure organizations were caused due to a business partner being initially compromised, highlighting the security risks that over-trusting environments pose.

That aside, one of the big stories over the past couple of years has been the rise of ransomware: malicious code that locks up data, enterprise systems, or individual computers, forcing users to pay a ransom to (they hope) retrieve their systems or data.

But according to IBM, there are no obvious winners or losers in this insidious practice. The report adds:

Businesses that paid threat actors’ ransom demands saw $610,000 less in average breach costs compared to those that chose not to pay – not including the ransom amount paid.

However, when accounting for the average ransom payment – which according to Sophos reached $812,000 in 2021 – businesses that opt to pay the ransom could net higher total costs, all while inadvertently funding future ransomware attacks.
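To see how paying can still come out worse, consider a rough back-of-the-envelope calculation (a sketch in Python using the figures quoted above; treating the $4.35 million global average as the non-payer’s baseline breach cost is an assumption, since the report does not publish that baseline directly):

# Rough net-cost comparison: paying vs. not paying a ransom.
# Assumption: the $4.35M global average breach cost stands in for
# the baseline cost of an organization that refuses to pay.
baseline_no_pay = 4_350_000  # assumed non-payer breach cost (global average)
payer_saving = 610_000       # average saving reported for organizations that paid
avg_ransom = 812_000         # average ransom payment (Sophos, 2021)

total_if_paid = baseline_no_pay - payer_saving + avg_ransom
print(f"Not paying: ${baseline_no_pay:,}")  # $4,350,000
print(f"Paying:     ${total_if_paid:,}")    # $4,552,000

On those assumed figures, paying works out roughly $200,000 more expensive in total – before even counting the way ransom payments bankroll future attacks.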

The persistence of ransomware is fuelled by what IBM calls the “industrialization of cybercrime”.

The risk profile is also changing. Ransomware attack times show a massive drop of 94% over the past three years, from over two months to just under four days. Good news? Not at all, says the report, as the attacks may be higher impact, with more immediate consequences (such as destroyed data, or private data being made public on hacker forums).
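That 94% figure is consistent with the stated durations (a quick sanity check, assuming “over two months” means roughly 62 days and “just under four days” means about 3.9):

# Sanity check on the reported 94% drop in ransomware attack duration.
# Assumed values: "over two months" ~= 62 days; "just under four days" ~= 3.9 days.
before_days = 62
after_days = 3.9
reduction = (before_days - after_days) / before_days
print(f"Reduction: {reduction:.0%}")  # prints "Reduction: 94%"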

My take

The key lesson in cybersecurity today is that all of us are both upstream and downstream from partners, suppliers, and customers in today’s extended enterprises. We are also at the mercy of reused but compromised code from trusted repositories, and even sometimes from hardware that has been compromised at source.

So, what is the answer? Businesses should ensure that their incident response plans are tested rigorously and frequently in advance – along with using red-, blue-, or purple-team approaches (thinking like a hacker, a defender, or both).

Regrettably, IBM says that 37% of organizations that have IR plans in place fail to test them regularly. To paraphrase Spinal Tap, you can’t code for stupid.

Source: https://diginomica.com/cybersecurity-whats-real-cost-ask-ibm (27 July 2022)
Killexams : IBM report reveals data breach cost average of $4m globally

A survey by IBM Security has revealed that data breaches are higher-impact and costlier than ever before, with the global average reaching an all-time high of $4.35 million.

Conducted on behalf of IBM by the Ponemon Institute, the 2022 Cost of a Data Breach Report was based on in-depth analysis of real-world data breaches experienced by 550 organisations globally between March 2021 and March 2022.

The report showed breach costs rising by nearly 13 per cent over the past two years. The results suggest these incidents may also be contributing to the rising cost of goods and services: 60 per cent of surveyed organisations said they had raised their product or service prices because of a breach.

The survey also showed that 83 per cent of the organisations studied had experienced more than one data breach. The aftereffects of a breach also linger long after it occurs, with 50 per cent of breach costs incurred more than a year after the incident.

Other key findings of the report revealed that ransomware victims who decided to pay threat actors’ ransom demands incurred only $610,000 less in breach costs than those who chose not to pay – a figure that does not include the ransom itself.

The study shows that 80 per cent of critical infrastructure organisations studied don’t adopt ‘zero trust’ strategies, seeing average breach costs rise to $5.4 million – a $1.17 million increase compared with those who do.

Immature cloud security practices – with 43 per cent of organisations reporting they were only in the early stages of applying security measures across their cloud environments – resulted in breach costs that were $660,000 higher on average than at organisations with mature security across their cloud environments.

Commenting on the report, Charles Henderson, global head of IBM Security X-Force, said: “This report shows that the right strategies coupled with the right technologies can help make all the difference when businesses are attacked.”

Source: https://www.fstech.co.uk/fst/IBM_report_reveals_data_breach_cost_average_of_4m_globally.php (4 August 2022)