Simply study and remember these C9560-654 braindumps questions

Every candidate who takes the C9560-654 exam finds that the C9560-654 test questions differ considerably from the C9560-654 ebook and course books. We have taken this issue seriously. We have gathered the most up-to-date, recent, and legitimate C9560-654 questions and answers and built a database to help candidates pass their exams with excellent grades.

Exam Code: C9560-654 Practice test 2022 by team
C9560-654 IBM Tivoli Application Dependency Discovery Manager V7.2.1.3 Implementation

Exam Title : IBM Certified Deployment Professional - Tivoli Application Dependency Discovery Manager V7.2.1.3
Exam ID : C9560-654
Exam Duration : 90 mins
Questions in test : 68
Passing Score : 66%
Official Training : Tivoli Application Dependency Discovery Manager
Exam Center : Pearson VUE
Real Questions : IBM Tivoli Application Dependency Discovery Manager Implementation Real Questions
VCE practice test : IBM C9560-654 Certification VCE Practice Test

- Given a list of initial business applications to be discovered, the list of servers each application runs on, and the approved project plan, determine the order, plan and methodology for discovery so that discovery scopes are defined for discovering the servers, and their components, that the initial business applications run on.
- Given a customer's environment, design the architecture so that the initial architecture plan for the solution has been created.
- Given a customer's environment, determine the best estimate of the number of TADDM components required so that the initial architecture plan for the deployment has been created.
- Given the customer's network configuration/diagrams and TADDM solution architecture, define firewall requirements so that the list of ports to be opened on firewalls is delivered.
- Given the list of technologies to be discovered and the list of sensors that require credentials to run correctly, gather the proper information from TADDM documentation, communicate it to the customer, then implement and refine so that requirements for credentials are communicated and implemented in the environment.
- Given SME(s) and/or documentation, determine and document the customer's existing environments that will be discovered and managed with TADDM so that an implementation plan is developed.
- Given a server with an operating system installed for TADDM installation, verify that the OS configuration and required software or libraries are installed on that server so that the server is available for TADDM installation.
- Given a list of computer systems/components/applications that will be discovered, create a list of TADDM sensors that will be run so that a list of sensors and any required credentials has been created.
- Given that TADDM defines different levels of discovery, analyze data needs and explain to customers the different levels and the options that are available so that the customer understands the three levels of discovery available within TADDM.
- Given the customer's data source requirements, analyze the requirements to determine if Discovery Library Adapters (DLA) are necessary and a means to import and export data to/from TADDM so that the customer's environment has been evaluated for DLA requirements.

- Given a system ready for database creation, prepare the database for IBM Tivoli Application Dependency Discovery Manager (TADDM) on different operating systems so that database preparation is completed.
- Given the TADDM architecture document and installation binaries, install TADDM and set up the environment so that TADDM is installed, initially configured, and up and running.
- Given a running TADDM environment with the database and binaries backed up, install the TADDM fix pack so that the fix pack installation is completed successfully.
- Given an installed TADDM host, validate and complete the post installation configuration so that TADDM server will be ready for configuring the discovery.
- Given proper administrator authority for a designated anchor server, configure anchor servers so that a service account has been created that the anchor server will use for discovery behind the firewall.

- Given target systems prepared for Anchor installation, list of anchors to be deployed with SSH servers installed, configure Anchors so that Anchor objects are created in IBM Tivoli Application Dependency Discovery Manager (TADDM) GUI and tested to be running.
- Given a running TADDM environment and a list of Windows Gateways to be configured, perform SSH server installation and configuration on target systems, create Windows Gateway objects in TADDM GUI so that Windows Gateways are configured properly and ready to be used during discovery.
- Given the customer's need for the three levels of discovery and a running TADDM system, create discovery profiles for the different levels and enable the options that are available so that Configuration Items can be discovered.
- Given the list of extended attributes to be discovered by TADDM for the chosen operating system, create and configure the extended attributes, configure templates, and run a discovery so that the extended attributes are successfully populated with the desired values.
- Given Target server(s), a working TADDM server and access to the Discovery Management Console, using either the UI or command line add a Scope Set with the machine(s) configured so that the machine(s) can be discovered.
- Given a list of servers / images and components/applications to be discovered, define the user privileges required for TADDM scans so that users with proper credentials are deployed to targets to be discovered and TADDM Access Lists are configured properly.
- Given the user name and password for access to machines, a working TADDM server, and access to the Discovery Management Console, navigate to the Discovery Management Console and add an access credential so that Configuration Items can be discovered.
- Given the need to debug Sensors more efficiently and effectively, set the SplitSensor option so that a clear view of each sensor is available.
- Given a custom design file, configure the custom Business Intelligence and Reporting Tool (BIRT) report so that a custom BIRT report is generated for the collected data.
- Given a running TADDM environment and a database connection, generate a report so that a report is generated.
- Given the list of Windows systems that will be discovered with a non-admin account and the account name to be created, configure the Windows targets for non-admin discovery so that the Windows systems are discovered at Level 2 using the non-admin account.
- Given the discovery schedule, configure TADDM scheduling so that discovery starts at a given time.
- Given an installed TADDM server, create snapshots that take a point-in-time copy of basic information about computer systems, discovery events and server applications running on computer systems so that these point-in-time snapshots are available when needed.
- Given a list of locations to be configured in TADDM, make the necessary decisions and modifications in the anchor properties files so that discovered components have the location tag attribute set properly.
- Given the information required to create a custom server template, a running TADDM server, and access to the Discovery Management Console, define custom server templates and build them from the UI so that custom applications are discovered properly.
- Given that TADDM and IBM Tivoli Monitoring (ITM) are installed, prepare ITM and TADDM environment so that discovery can be performed by using ITM agent.
- Given a business application and the list of servers/components/applications that compose it, create the appropriate application descriptor files and deploy them to the appropriate directories on the servers where discovery will be done so that discoveries have been run and it has been checked that the business application was built correctly.
- Given a list of users to be created and Admin access to Data Management Portal, create users so that TADDM user IDs have been configured for use.

- Given the need to categorize application components into business applications and services, create business application and services using the Data Management Portal that combines large collections of individual components into logical groups.
- Given a properly installed and operating IBM Tivoli Application Dependency Discovery Manager (TADDM) system and access credentials, execute API query so that XML data is extracted and available via STDOUT (Standard Output).
- Given the timespan for keeping the historical changes of attribute values, perform a cleanup of the database by running the proper SQL statements so that the TADDM database is cleaned of old change-history data.
- Given a running TADDM environment and the maximum size of the file system to be used for logs, configure log maintenance settings and optionally implement the sensor log removal mechanism so that log files are maintained automatically.
- Given a customer's need for TADDM, prepare, install, and execute TADDM so that the customer's environment is fully discovered and validated.
- Given a list of servers/images and components/applications to be discovered, manage and obtain credentials so that credentials are created on the requested servers/images/components/applications.
- Given an existing scope and discovery profile to use, run a TADDM discovery using the Discovery Management Console and the API so that a discovery has been run by using the API and Discovery Management Console.
- Given a valid IDML file for loading, a running TADDM server and access to the server running TADDM, use the loadidml script to populate the TADDM database so that the information contained in the IDML file is loaded into the TADDM database.
- Given a TADDM server, the Discovery Management Console, and a user ID and password, review the types of status messages that occur during discovery and while viewing history so that the status messages are understood.
- Given an installed TADDM server, access to the server running TADDM and the new database user password, update the file and encrypt the database access passwords so that the file is updated with the new encrypted passwords.
- Given an installed TADDM server and terminal access to the server running TADDM, start and stop the TADDM server processes so that the TADDM server has been stopped or started.
- Given the TADDM server is running, run the analytics from the Data Management Portal so that the necessary information is available to be analyzed.
- Given a working TADDM server, and access to the Data Management Portal, navigate to the Data Management Portal and create Configuration Items, and dependencies so that a new Configuration Item (CI) or dependency is created.
- Given that the present roles do not suffice for an access requirement to TADDM, create a new role with unique permissions to fulfill the request so that a new TADDM role is configured for use.
- Given supported hardware, an operating system and running database, command-line access to the TADDM server, and the root password, create backups and perform restores so that a backup and restore are available when needed.

Problem Determination
- Given a running IBM Tivoli Application Dependency Discovery Manager (TADDM) environment and database connectivity, tune the Discovery parameter so that the discovery parameters are tuned appropriately.
- Given the list of Java Virtual Machines for which to set extended logging, edit the file and set the proper values for the logging level so that the logging level is set to the desired value and the logs contain the information needed for problem diagnosis.
- Given that a problem has occurred within the TADDM environment and error levels need to be modified to ensure the correct messaging is captured for remediation, enable or disable advanced logging for TADDM so that the environment is set to debug mode and the correct messaging is captured and resolved when a problem occurs.
- Given the need to diagnose a problem within the TADDM environment, utilize the support bin tools so that problem can be debugged.
- Given that TADDM is installed, user IDs are created, and problems occur with the Discover, Topology, Discovery Admin, Proxy or Gigaspace processes, review the jvmarg settings in the collation.properties file and determine if more memory is required so that performance is enhanced and/or service is not interrupted.
- Given the need to define common parameters in the properties files, review the most common parameters located in the file so that the common parameters have been defined.
- Given a TADDM server, identify points of failure regarding NMAP on L1 discoveries so that those points of failure are identified and corrected.
- Given an application sensor failure, conduct an L3 Scan and troubleshoot application sensor failures so that the problems are resolved for a successful L3 collection.

The Rise Of Digital Twin Technology

Senior advisor to the ACIO and executive leadership at the IRS.

The ongoing global digital transformation is fueling innovation in all industries. One such innovation is digital twin technology, a concept whose roots go back to NASA's Apollo program, when scientists created a twin of the Apollo spacecraft and conducted experiments on the clone before the mission launched. Digital twin technology is now becoming very popular in the manufacturing and healthcare industries.

Did you know that the densely populated city of Shanghai has its own fully deployed digital twin (a virtual clone) covering more than 4,000 square kilometers? It was created by mapping every physical device into a new virtual world and applying artificial intelligence, machine learning and IoT technologies to that map. Similarly, Singapore is preparing a full deployment of its own digital twin, and the McLaren sports car already has one.

Companies like Siemens, Philips, IBM, Cisco, Bosch and Microsoft are already miles ahead in this technology, fueling the Fourth Industrial Revolution. The combination of AI, IoT and data analytics can predict the future performance of a product even before the product's final design is approved. Organizations can model planned processes using digital twin technology, so process failures can be analyzed ahead of production. Engineering teams can perform scenario-based testing in simulation labs to predict failures, identify risks and apply mitigations.
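To make the simulate-before-production idea concrete, here is a toy sketch of the pattern: a virtual model mirrors sensor readings from a physical asset and answers what-if questions in software rather than on real hardware. Every class name, threshold and rate below is an illustrative assumption, not any vendor's API.

```python
# Toy "digital twin" of a motor: a virtual model kept in sync with sensor
# readings and used to predict failure before it happens on the real asset.
FAILURE_TEMP_C = 90.0  # assumed failure threshold for the physical motor

class MotorTwin:
    """A minimal virtual clone that mirrors sensor data and runs scenarios."""

    def __init__(self, temp_c: float, heating_rate_c_per_hr: float):
        self.temp_c = temp_c
        self.heating_rate = heating_rate_c_per_hr

    def sync(self, sensor_temp_c: float) -> None:
        # Keep the twin aligned with the physical asset's latest reading.
        self.temp_c = sensor_temp_c

    def hours_until_failure(self) -> float:
        # Run the scenario forward in the virtual world, not on hardware.
        if self.heating_rate <= 0:
            return float("inf")
        return (FAILURE_TEMP_C - self.temp_c) / self.heating_rate

twin = MotorTwin(temp_c=60.0, heating_rate_c_per_hr=5.0)
twin.sync(sensor_temp_c=70.0)
print(twin.hours_until_failure())  # 4.0 hours before the failure threshold
```

A real deployment would replace the linear model with physics-based or learned models, but the loop of syncing state and projecting it forward is the same.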

Digital twins produce a digital thread that enables data flows and provides an integrated view of asset data. These digital threads are key to optimizing the product life cycle: simulating a digital thread can identify gaps in operational efficiency and surface a wealth of process improvement opportunities through the application of AI.

Another reason behind the overwhelming success of digital twin technology is its use in issue identification and minor product design corrections while products are in operation. For a high-rise building, for example, a digital twin lets us identify minor structural issues and test fixes in the virtual world before carrying them over to the real world, cutting down long testing cycles.

By the end of this decade, scientists may come up with a fully functional digital twin of a human being, which could tremendously help medical research. There may be digital versions of some of us walking around that, when needed, can update our families or healthcare providers about any critical health conditions we may have. Powerful use cases for digital twin humans include drug testing and proactive injury prevention.

Organizations starting to think about implementing digital twin technology in product manufacturing should first look at the tremendous innovation done by leaders like Siemens and GE; hundreds of case studies published by these two organizations are openly available. The next step is to create a core research team and estimate the cost of implementing this technology, with the right ROI justification for your business stakeholders. This technology is hard to implement and hard to maintain, which is why you should develop a long-term, sustainable strategy for digital twin implementation.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?

Kiran Palla | Wed, 03 Aug 2022
Five Ways To Convince Your Customers To Try Something New

Wendy Chen is the CEO of Omnistream, a retail automation company helping retailers bring joy to consumers.

Every innovator, at some point, faces the same challenge. You’ve built a revolutionary mousetrap, but you need to convince people to actually take a chance on your product—and stop using whatever solution they’re currently using to keep the rodent population under control.

That’s a tough sell because, by definition, your new product is unproven. Even if you’ve been around a while and you have a clear record of success, and even if you can show how much ROI your product will generate on paper, customers quite reasonably worry about the potential for things to go wrong.

To drive things forward, it’s important to build your sales pipeline—and even your product itself—with your customers’ pain points in mind. Here are five ways to convince your customers to bet on innovation and take a chance on your product:

Understand The Friction

It isn’t enough to show your buyer that your product is better than the alternative. You need to understand and account for the friction that keeps them from wanting to make changes. That isn’t just conservatism—it’s a rational disinclination toward any sort of change.

Some industries, some companies and some product categories bring more inherent friction than others. It’s up to you to understand that and find ways to lubricate the wheels and create momentum for change.

Minimize The Risk

The biggest source of friction, of course, is the risk inherent in trying something new. If there’s a working product in place, then making any change brings a non-zero chance that things will stop working—and that usually ends with someone getting fired. Understandably, people in positions to make these decisions often prioritize minimizing risk rather than maximizing value, and it’s up to you to account for that fact.

One smart approach: Instead of trying to sell customers on a widespread rollout, offer to run a low-cost, low-risk pilot project. My company is a retail tech solutions vendor, and we often use pilot projects or small-scale tests with a handful of stores across one or two product categories to convince potential customers to try us out. We then measure their incremental growth and resulting store-level profitability having used our solutions against control stores.

Keep Costs Low

Nobody wants to spend money on unproven technology, and no matter how great your product, every customer will view it as unproven until they’ve seen it delivering consistent results for their specific use-case. Finding creative ways to keep costs low, especially during the early stages, is vital.

Some SaaS companies now use consumption-based pricing, rather than regular monthly subscriptions, to reassure customers they'll only pay for what they use. Others, like my company, peg the price to the increased performance we deliver. We do everything necessary to make sure our retail clients succeed, so they know they're always coming out ahead.

It's also important to ensure your product plays nicely with legacy infrastructure and complements the customer's existing investments: It doesn't matter how great your product is if it requires your customer to completely rebuild their backend IT or POS systems. Simple integration into their existing core systems ensures speedy execution. Another good option is a modular offering, which lets customers choose only the processes they want, ensuring full integration into their existing supply chain, retail planning and forecasting systems.

Help Your Advocates Communicate Your Value

As the saying goes, nobody gets fired for buying IBM. Your goal during the pilot project is to develop advocates for your product—people at all levels, from end-user to the C suite—who are willing to stick their necks out and say your product is worth implementing more broadly.

To do that, you need to ensure you’re delivering at all levels of the organization: Change management support for the implementation team, a streamlined experience for users, real benefits (results) for their supervisors and clear metrics that document your product’s value and allow it to be easily communicated up the command chain.

Make Your Pilot Scalable

Once you’ve secured buy-in for your product, you need to be able to communicate a clear strategy for scaling up the pilot and delivering broader value. This needs to be baked into the DNA of your pilot: If you’ve focused on a handful of stores for one to two product categories, for instance, then make it easy to add a couple more stores or categories—or quickly scale up and add entire regions.

For bonus points, make your product more valuable as it scales. You’ve shown your product works across a couple of locations—but can you offer additional learnings and customer insights as you bring more locations into your network? You’ll also need to show willingness to customize your product in order to serve your customers’ unique needs and fringe cases and stay aligned with their own strategy for growth, so they’re motivated to lean into the relationship as they expand.

Enabling Innovation

We’re raised to view innovators as mavericks—people who think differently and change the world by the sheer force of their creativity and contrarianism. But the reality is that innovation is a team sport, and it’s only by convincing other people to join your mission that you’ll be able to win top-to-bottom buy-in and truly bring your product to scale. To succeed as B2B software innovators, we need to spend as much time thinking about how to turn our customers into innovators as we do on planning our own innovations.


Wendy Chen | Sun, 07 Aug 2022
Quantinuum scales error correction to strengthen fault-tolerant quantum computing


Although quantum computing companies and researchers have made progress in scaling the number of physical qubits, this also tends to increase the rate of errors. A main concern in this area is that adding enough qubits together to solve significant problems may also lead to error-prone results. 

Researchers at Quantinuum report that they have recently found a way to scale the number of qubits while improving performance and reducing the error rate. This is no simple task because quantum computers suffer a higher volume of errors than classical computers. In addition, many error correction techniques that form a mainstay of classical computing, like a parity check, introduce new errors when applied in quantum computing.

Quantinuum was formed by the merger of Cambridge Quantum Computing, a leading quantum software company, and the quantum hardware division of Honeywell. Cambridge Quantum Computing had been developing better quantum algorithms and ways to translate classical computer algorithms to work on quantum computers. Meanwhile, Honeywell had been pioneering a novel quantum computing ion trap architecture that allows qubits to connect more easily than other approaches.

Honeywell’s work allowed the team to transform 20 physical qubits into two more reliable logical qubits. Although this may seem like a step backward numerically speaking, it’s a tremendous step forward since these qubits can be added together. 

Researchers commonly refer to the current generation of quantum computers as part of the noisy intermediate scale quantum (NISQ) era. This work will ultimately pave the way to build fault-tolerant quantum computers that can scale to address significant problems.

Quantum twist on redundancy

Hardware errors in which a transistor spontaneously switches tend to be rare in modern semiconductor circuits, but in some cases, like running a safety-critical system exposed to radiation, engineers design error correction systems that combine three processors. A supervisory system compares the results: if one processor's calculation does not match the others, the supervisor can detect the mismatch and safely ignore the faulty result in favor of the majority.
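The triple-modular-redundancy scheme described above can be sketched in a few lines: the supervisor simply takes a majority vote over the redundant results. This is an illustrative sketch of the idea, not any specific safety-critical implementation.

```python
# Classical triple modular redundancy (TMR): run the same calculation on
# three independent "processors" and let a supervisor take the majority
# vote, masking a single faulty result.
from collections import Counter

def majority_vote(results):
    """Return the value reported by the majority of redundant processors."""
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority: too many disagreeing processors")
    return value

# Two healthy processors agree; one suffered a spontaneous bit flip.
print(majority_vote([42, 42, 43]))  # 42: the faulty result is outvoted
```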

Quantum computers introduce new problems: there are more kinds of errors that need to be corrected, and a relatively simple parity check from classical computing can itself produce new errors in quantum computing.

Quantum computers can suffer from two kinds of errors: bit flips and phase flips. In a bit-flip error, the qubit incorrectly flips its computational state from zero to one or vice versa. In a phase-flip error, which has no classical counterpart, the phase of the qubit flips state. Previous theoretical research identified a way to correct both types of errors by constructing logical qubits. Last year, Quantinuum demonstrated a practical implementation of these techniques in a quantum computer using a 5-qubit code. However, errors still grew as the number of qubits was scaled.
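The bit-flip half of the story has a simple classical analogue: a repetition code with majority-vote decoding. The sketch below simulates that analogue in ordinary Python; real quantum codes must additionally handle phase flips, which this classical toy cannot capture.

```python
# Classical simulation of the 3-qubit bit-flip repetition code: one logical
# bit is stored as three copies, and majority decoding corrects any single
# bit-flip error.
import random

def encode(logical_bit):
    return [logical_bit] * 3          # 0 -> [0,0,0], 1 -> [1,1,1]

def apply_bit_flip(codeword, position):
    flipped = list(codeword)
    flipped[position] ^= 1            # the noise channel flips one copy
    return flipped

def decode(codeword):
    return 1 if sum(codeword) >= 2 else 0   # majority vote

noisy = apply_bit_flip(encode(1), random.randrange(3))
print(decode(noisy))  # 1: the single error is corrected
```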

In the new technique, called a color code, the researchers found a way to combine seven physical qubits into one logical qubit, in coordination with two to three ancillary qubits used for probing. They implemented this new color code technique on top of Quantinuum's latest computer with 20 physical qubits to create two reliable logical qubits. These new logical qubits can be scaled efficiently in a way that increases fault tolerance, which was not practical with bare physical qubits or even the 5-qubit approach.
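The parity-check machinery behind the seven-qubit color code can be illustrated classically: its bit-flip checks coincide with the parity checks of the classical [7,4] Hamming code, so measuring three parities (the "syndrome") with ancillary bits locates any single flipped position without reading the data itself. This is a classical sketch of the syndrome idea only, not a simulation of the full quantum code.

```python
# Syndrome decoding with the Hamming(7,4) parity-check matrix, whose checks
# match the bit-flip checks of the 7-qubit color (Steane) code.
H = [  # column i is the binary representation of i + 1
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(error):
    """Parity of each check row against a length-7 error pattern."""
    return [sum(h * e for h, e in zip(row, error)) % 2 for row in H]

def error_position(syn):
    """The syndrome's binary value names the flipped position (1-based; 0 = no error)."""
    return syn[0] + 2 * syn[1] + 4 * syn[2]

err = [0, 0, 0, 0, 1, 0, 0]           # a single flip on position 5
print(error_position(syndrome(err)))  # 5
```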

Russell Stutz, director of commercial hardware at Quantinuum, told VentureBeat this means that as they add more qubits, the probability of failures that ruin the entire computation decreases with only a modest rise in the number of physical qubits.

One remaining challenge is the quantum error correction cycle. The simple act of probing a qubit for errors can introduce new ones. Stutz said future work will explore ways to ensure they are not adding more errors than they remove with an error correction code. 

Connection required

Researchers have thought about how different quantum error correction approaches might work. Although the Quantinuum approach isn’t delivering as many raw physical qubits as other approaches, these are fully connected, which opens opportunities to leverage these innovative algorithms. In many quantum architectures, each qubit is only connected to a few neighbors.

“We are now testing quantum error correction code concepts dreamed up in the late 1990s and can implement in these real systems for the first time,” Stutz said. “It is an exciting time for learning about quantum error correction.” 

Stutz says this research is a significant milestone on the long road to fault-tolerant quantum computing. He feels that researchers will be able to solve many practical problems once they scale systems to 50 logical qubits with lower error rates than physical qubits. 

“It is laying the groundwork,” Stutz said. “You cannot really solve an industry-relevant problem with the number of logical qubits we are dealing with right now. We are essentially building really good components that will be used in a larger computation.”



George Lawton | Mon, 08 Aug 2022
IT industry grapples with complexity and security as Kubernetes adoption grows

The information technology industry has a complexity problem, and it is leading to deeper conversations among thought leaders around how to solve it.

The days of building applications on one server with a monolithic architecture have given way to developing numerous microservices, packaging them into containers, and orchestrating the entire production using Kubernetes in a distributed cloud.

It’s no wonder that in global survey results released by Pegasystems Inc. barely two months ago, three out of four employee respondents felt job complexity had continued to rise and they were overloaded with information, systems and processes. Nearly half singled out digital transformation as the cause.

Kubernetes has proven a great tool for driving modern IT infrastructure, yet it has also figured prominently in the design of overly complex systems. One of the tech industry’s most prominent thought leaders called attention to this issue in a latest interview during DockerCon 2022, with virtual coverage produced by theCUBE, SiliconANGLE Media’s livestreaming studio.

“The world is going to collapse on its own complexity,” noted development leader Kelsey Hightower said during a conversation with Docker Inc. Chief Executive Scott Johnston. “The number of teams I meet, and I won’t mention any names, say, ‘Kelsey, we’re going to show you our Kubernetes stack.’ Twenty minutes later, they are at piece number 275. Who’s going to maintain all of this? Why are you doing this?”

Move toward common interfaces

Hightower’s anecdote highlights the need for standardized tools within the Kubernetes developer community. As Kubernetes has matured, it has become a platform for building other platforms, and platform-as-a-service offerings such as CloudRun, OpenShift and Knative have enabled a great deal of operational management tasks for developers.

There has also been a move to create common interfaces within Kubernetes to enable adoption without requiring open-source community-wide agreement on implementation. These include Container Networking Interface, Container Runtime Interface and Custom Resource Definitions.
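As a concrete illustration of the CRD mechanism, here is a minimal CustomResourceDefinition manifest expressed as a Python dict. The field layout follows the Kubernetes apiextensions.k8s.io/v1 schema; the "Widget" resource type itself is a made-up example, not a real project's API.

```python
# A minimal CustomResourceDefinition manifest: teams extend the Kubernetes
# API with their own resource types without changing the core.
widget_crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "widgets.example.com"},  # must be <plural>.<group>
    "spec": {
        "group": "example.com",
        "scope": "Namespaced",
        "names": {"plural": "widgets", "singular": "widget", "kind": "Widget"},
        "versions": [{
            "name": "v1",
            "served": True,    # this version is exposed by the API server
            "storage": True,   # this version is the one persisted in etcd
            "schema": {"openAPIV3Schema": {
                "type": "object",
                "properties": {"spec": {
                    "type": "object",
                    "properties": {"replicas": {"type": "integer"}},
                }},
            }},
        }],
    },
}

print(widget_crd["metadata"]["name"])  # widgets.example.com
```

Serialized to YAML, this is exactly what `kubectl apply` would accept; after applying it, `Widget` objects can be created and watched like any built-in resource.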

Despite the IT industry’s growing complexity, Hightower sees hope in the Kubernetes community’s ability to centralize around standardized tools.

“These contracts matter, and these standards are going to put complexity where it belongs,” Hightower said. “If you are a developer, yes, the world is complex, but it doesn’t mean that you have to learn all of that complexity. When you standardize you get to level the whole field up and move much faster. It’s got to happen.”

The challenge for many organizations is how to balance the requirements of running a data-driven business with the complexity that brings. While some enterprises have merely dipped their toes into the container deployment waters, others have jumped headfirst into the pool.

A Canonical Ltd. cloud operations report found that Kubernetes users commonly deploy two to five production clusters. The European Organization for Nuclear Research, known as CERN, is the largest particle physics laboratory in the world and runs approximately 210 clusters. Then there is Mercedes-Benz, which has pursued another model entirely. The global automaker gave a presentation at KubeCon Europe in May that described how it uses more than 900 Kubernetes clusters.

The German automaker was an early adopter of Kubernetes. It began experimenting with the container orchestration tool in 2015, only a year after Google LLC open-sourced the technology.

“We started small as a grassroots initiative,” Andrea Berg, manager of corporate communications at Mercedes-Benz North America Corp., said in comments provided to SiliconANGLE. “It was driven in a ‘from developers to developers’ mindset and became more and more successful. We helped change the mindset of our company towards cloud-native and free and open-source software.”

Mercedes-Benz Tech Innovation, the company’s subsidiary for overseeing company-wide technology, has grown its structure to support hundreds of application development teams. As the number of Kubernetes clusters grew, the company realized that it would need a tool to manage them. It turned to Cluster API on OpenStack, a Kubernetes-native way to manage clusters among different cloud providers.

The company also created a culture where developers would soon realize that as applications were completed, there would be no more ticket desks to run them. Automation tools would drive DevOps.

“We realized that a single shared cluster would not fit our needs,” Jens Erat, DevOps engineer at Mercedes-Benz, said during a KubeCon Europe presentation. “We had engineers with in-depth knowledge; we understood the tech and decided to create our own solution instead. You build it, you run it. There’s an API for that.”

Knative eases developer burden

The API path toward an easier approach for deploying Kubernetes in the enterprise received a boost in March when the Cloud Native Computing Foundation announced that it would accept Knative as an incubating project. Originally developed by Google, Knative is an open-source, Kubernetes-based platform for managing serverless and event-driven applications.

The concept behind serverless technology is to bundle applications as functions, upload them to a platform, and have them automatically scaled and executed. Developers only have to deploy apps. They don’t have to worry about where they run or how a given network is handling them.
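In miniature, that division of labor looks like the toy sketch below: the developer registers a plain function, and a "platform" layer owns routing and execution. The `register`/`invoke` names are illustrative only and do not correspond to Knative's or any real platform's API.

```python
# Toy sketch of the serverless idea: developers supply functions; the
# platform owns where and how they run. Not any real platform's API.

functions = {}

def register(name):
    """Decorator: 'upload' a function to the toy platform under a name."""
    def wrap(fn):
        functions[name] = fn
        return fn
    return wrap

def invoke(name, event):
    """The platform routes an incoming event to the named function."""
    return functions[name](event)

@register("checkout")
def checkout(event):
    # The developer writes only business logic, no infrastructure code.
    return {"status": "ok", "total": sum(event["prices"])}

print(invoke("checkout", {"prices": [3, 4]}))  # {'status': 'ok', 'total': 7}
```

A real platform such as Knative adds the parts this sketch omits: packaging the function in a container, scaling instances (including to zero) with demand, and wiring events to functions.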

A number of major companies have a vested interest in seeing Knative become more widely used. Red Hat, IBM, VMware and TriggerMesh have worked with Google to strengthen Knative’s ability to manage serverless and event-driven applications on top of the Kubernetes platform.

“We see a lot of interest,” Roland Huss, senior principal software engineer at Red Hat Inc., said in an interview with SiliconANGLE. “We heard before the move that many contributors were not looking into Knative because it was not part of a neutral foundation. We are still ramping up and really hope for more contributors.”

The road for Knative has been a bumpy one, which has exposed growing pains as the Kubernetes community has expanded. Google took some heat for previously deciding not to donate Knative, before announcing a change of heart in December.

Ahmet Alp Balkan, one of Google’s engineers who worked on different aspects of Knative prior to last year, penned a blog post that expressed concerns around how the serverless solution had been positioned within the developer community. Among Balkan’s concerns was the description of Knative as a building block for Kubernetes itself.

“I think we overestimated how many people on the planet want to build a Heroku-like platform-as-a-service layer on top of Knative,” Balkan wrote. “Our messaging revolved around these ‘platform engineers’ or operators who could take Knative and build their UI/CLI experience on top. This was the target audience for those building blocks Knative had to offer. However, this turned out to be a very small and niche audience.”

Need for greater security

Thought leaders in the Kubernetes community have also become more attuned to security for the container orchestration tool. Feedback from the user base has validated this focus.

In May, Red Hat published the results of a survey that found that 93% of respondents had experienced at least one security incident in their container or Kubernetes environments. More than half of respondents had delayed or slowed application deployment over security concerns. The report’s findings received additional credence in late June. Scanning tools used by the cybersecurity research firm Cyble Inc. uncovered 900,000 Kubernetes instances that were exposed online.

“Real DevSecOps requires breaking down silos between developers, operations and security, including network security teams,” said Kirsten Newcomer, director of cloud and DevSecOps strategy at Red Hat, during a KubeCon Europe interview with SiliconANGLE. “The Kubernetes paradigm requires involvement. It forces involvement of developers in things like network policy for things like the software-defined network layer.”

There is also an expanding list of open-source tools for hardening Kubernetes environments. KubeLinter is a static analysis tool that can identify misconfigurations in Kubernetes deployments. Security-Enhanced Linux, a default security feature implemented in Red Hat OpenShift, provides policy-based access control. And the CNCF project Falco acts as a form of security camera for containers, detecting unusual behavior or configuration changes in real time. Falco has reportedly been downloaded more than 45 million times.
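The value of a static analysis tool like KubeLinter is that it catches misconfigurations in a manifest before anything reaches a cluster. As a rough illustration of the kind of rule such tools apply, the sketch below checks a deployment for two common findings, missing resource limits and a container allowed to run as root; these two simplified rules are illustrative and are not KubeLinter's actual rule set or output format.

```python
# Hedged sketch of KubeLinter-style static checks on a deployment manifest.
# The rules below are simplified illustrations, not KubeLinter's real checks.

def lint(manifest: dict) -> list:
    """Return a list of human-readable findings for each container."""
    findings = []
    for c in manifest["spec"]["template"]["spec"].get("containers", []):
        # Rule 1: every container should declare resource limits.
        if "limits" not in c.get("resources", {}):
            findings.append(f"{c['name']}: no resource limits set")
        # Rule 2: containers should be required to run as non-root.
        if not c.get("securityContext", {}).get("runAsNonRoot", False):
            findings.append(f"{c['name']}: may run as root")
    return findings

deployment = {
    "kind": "Deployment",
    "spec": {"template": {"spec": {"containers": [
        {"name": "web", "image": "nginx"}
    ]}}},
}

for finding in lint(deployment):
    print(finding)
# web: no resource limits set
# web: may run as root
```

Running such checks in a CI pipeline shifts security review left, which is the silo-breaking practice Newcomer describes above.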

With Kubernetes, it is easy to get caught up in metrics surrounding enterprise adoption, security and application deployments. Yet behind the increased dependence on containers can be found an important element that gets lost in the noise. Whether Kubernetes is complex or not, a lot of people now depend on this technology to work.

Near the end of his dialogue this spring with Docker’s Johnston, Hightower related a story about his previous work for a financial firm that processed shopping transactions for families needing government assistance. At one point, the transaction processor crashed and Hightower joined his colleagues in a “war room” as programmers followed a laborious set of steps to reboot the system and get the platform working.

“We’re just looking at this screen, some things were turning green and some were turning red, and the things turning red were the result of payments being declined,” Hightower recalled. “Each of those items turning red on the dashboard represented someone with their whole family trying to buy groceries. Their only option was to leave all of their groceries there. What we have to do as a community is remind ourselves that it’s people over technology, always.”

Image: distelAPPArath/Pixabay


Kyndryl Holdings, Inc. (KD) CEO Martin Schroeter on Q1 2023 Results - Earnings Call Transcript

Kyndryl Holdings, Inc. (NYSE:KD) Q1 2023 Earnings Conference Call August 4, 2022 8:30 AM ET

Company Participants

Lori Chaitman - Global Head of Investor Relations

Martin Schroeter - Chairman and Chief Executive Officer

David Wyshner - Chief Financial Officer

Conference Call Participants

Tien-Tsin Huang - JPMorgan

Operator


Good morning, and welcome to the Kyndryl First Quarter 2023 Earnings Conference Call. [Operator Instructions] Please be advised that today's call is being recorded.

I will now turn the call over to Lori Chaitman, Global Head of Investor Relations at Kyndryl. You may begin.

Lori Chaitman

Good morning, everyone, and welcome to Kyndryl's Earnings Call for the Quarter Ended June 30, 2022, the first quarter of our new fiscal year.

Before we begin, I'd like to remind everyone that our remarks today will include forward-looking statements. These statements are subject to risk factors that may cause our actual results to differ materially from those expressed or implied, and these statements speak only to our expectations as of today. For more details on some of these risks, please see the Risk Factors section of our annual report on Form 10-K for the year ended December 31, 2021.

Kyndryl does not update forward-looking statements and disclaims any obligation to do so. In today's remarks, we will also refer to certain non-GAAP financial measures. Corresponding GAAP measures and a reconciliation of non-GAAP measures to GAAP measures for historical periods are provided in the presentation materials for today's event, which are available on our website at

With me here today are Kyndryl's Chairman and Chief Executive Officer, Martin Schroeter; and Kyndryl's Chief Financial Officer, David Wyshner. Following our prepared remarks, we will hold a Q&A session.

I'd now like to turn it over to our Chairman and CEO, Martin Schroeter. Martin?

Martin Schroeter

Thank you, Lori, and thanks to each of you for joining us today. I am enthusiastic about our momentum and proud of what the team has accomplished over the last 3 months. On today's call, we'll share Kyndryl's quarterly results and update you on our progress. I'll discuss our strategy and how we're executing on our 3 As initiatives: alliances, advanced delivery and accounts, which are driving us toward profitable growth. Then David will provide more detail on our first quarter financial results, reaffirm our fiscal 2023 outlook and link our latest progress to our financial goals.

It's been nine months since Kyndryl became an independent publicly traded company, and I am just as excited today about the opportunity ahead as I was on day 1. As you can imagine, there's never a dull moment post-spin. There's plenty of work to do to transition internal processes, build the new culture and seize market opportunities. For those of you who are new to the Kyndryl story, prior to our spin-off last November, we operated largely as a captive services provider, focused on supporting the products and technologies that IBM offered to its customers. Today, we are the world's largest IT infrastructure services company designing, managing and modernizing complex mission-critical systems at scale for some of the world's largest organizations.

I'm proud of how quickly we're charting a new course to better serve our customers through our new alliances with a range of top-tier technology providers and enhancements of our services delivery driven by upskilling and automation fueled by data, IP and best practices. Our new freedom of action has given us the opportunity to be part of a much larger and growing ecosystem that really matters to our customers, expanding our addressable market from about $240 billion to $415 billion and growing. By 2024, this IT services market is expected to grow to about $510 billion.

Our expanded collaborations with leading technology providers are making us more relevant to our customers and allowing us as their long-standing trusted IT partner to support and accelerate our customers' digital journeys in cloud, security, data and intelligent automation with a multi-vendor strategy. Equally important, with our independence, we can now invest in our business to create new capabilities and deliver them at scale by gaining certifications and credentials for our already skilled technologists and thereby grow our share of wallet with our existing customers.

Through our six practices, we can now meet needs that our customers have been asking us to meet for years in areas that we were previously prevented from serving. We're solidifying our position as a leading global provider of IT infrastructure services. We continue to generate twice as much infrastructure services revenue as anyone else and are uniquely focused on this sector of the market.

Our customers trust us to manage their most critical systems and we do it with the highest level of quality. I am really proud of our delivery teams. They continue to produce top-tier Net Promoter Scores generally north of plus 50. We continue to meet more than 99.7% of our service level agreement thresholds, with the June quarter being another quarter of above-target performance. We're pleased to have been named a leader in Gartner's Magic Quadrant for Managed Mobility Services and to be recognized as 1 of only 4 certified engineering and integration providers in Gartner's latest report on 4G and 5G networking. Our NPS and SLA metrics, along with a growing list of external accolades, highlight the world-class nature of our offerings. There are significant opportunities in front of us, and we understand that the macroeconomic environment is on many people's minds right now.

At Kyndryl, we run mission-critical IT systems, the hearts and lungs of our customers' operations, including global banking organizations, airline reservation systems, mobile networks and industrial supply chains. The essential nature of our business provides us some natural insulation from macro factors. In addition, our 3 As initiatives give us substantial opportunities that are specific to us and independent of the broader economy. Executing on these initiatives will deliver the benefits we need to strengthen our overall business performance and unlock substantial value for our customers, our employees and our stockholders alike.

A key enabler of our strategy has been the rapid build-out of our technology alliances and capabilities. Between November and March, we signed new collaborations with all 3 cloud hyperscalers, Amazon Web Services, Google Cloud and Microsoft Azure as well as many other leading technology companies. Since year-end, we've increased our cloud-related certifications by 36%, bringing our total to nearly 22,000 and giving us more capabilities to deliver cloud services.

This quarter, we've expanded or established new partnerships with Cisco, Five9, NetApp, Oracle, Red Hat, SAP and Veritas, continuing the theme of Kyndryl aligning with other top-tier technology providers now that we're independent. Customers are seeing how quickly we're leveraging these relationships, and they're now asking us to help them migrate a portion of their workloads to the cloud, manage their explosive growth in data, integrate legacy and new technologies from multiple partners and address their urgent need for cybersecurity and resiliency. It is remarkable to see how fast our relationships are expanding.

One example of this is a global bank, where we have a nearly 20-year business relationship. We run mission-critical systems and the infrastructure behind their systems of record. Our relationship began in the early 2000s with traditional data center outsourcing for 1 of their divisions, including mainframe services work. The scope of our work has expanded over time across geographies and divisions. Most recently though, we not only extended our contract tied to legacy systems, we also added hyperscaler cloud work. And beyond that, the integration required to make sure the bank is running the right workload on the right platform. We're now supporting our customer across their architecture, data and application security, resiliency and systems innovation.

We're adding value as a trusted strategic partner that has the technology expertise to meet their complex evolving multifaceted needs. And at the same time, we're driving profitable revenue growth for our business as we increase our services revenue from this customer. This example is just 1 of the many that have been either executed already or are in the works across a range of industries, geographies and customer needs.

Back in February, we committed to sharing our progress on our 3 As initiatives. As a reminder, we provided targets of at least $1 billion in signings tied to hyperscaler alliances this fiscal year, $200 million in annualized cost savings from advanced delivery by year-end and $200 million of annualized pretax benefit from our accounts initiative. I am pleased with the progress we've made in such a short period of time on our 3 As, and we're on track to deliver on our fiscal 2023 milestones for each of these initiatives.

In our Alliance initiative, we generated $235 million of hyperscale-related signings in the quarter, putting us on track to achieve our $1 billion annual target. We're increasingly going to market with hyperscalers to seamlessly meet customers' needs. As a Microsoft Azure expert managed service provider and premier partner with both AWS and Google, we have immediate credibility as well as unique knowledge of our customers' existing infrastructure and workloads. And the pace at which we've built our team certifications, credentials and capabilities puts us in a position to provide the top-tier levels of service that customers have come to expect from Kyndryl. This is demonstrated by another strong quarter of signings growth in advisory and implementation services, which were up 27% in constant currency compared to last year.

We are using our new technology partners to grow our share of wallet with existing customers. In our advanced delivery initiative, we're investing in intelligent automation and new ways of working, which frees up our people to be reskilled and redeployed to in-demand opportunities. This quarter, we expanded our proprietary delivery automation tooling to run more than 24 million automation events a month, more than double where we were a year ago. This significantly increases the level of service and resiliency we provide to our customers.

In the process, we freed up more than 1,900 of our people to serve new revenue streams and backfill attrition. When we free up people, we're increasing our productivity and the associated cost savings are running at an annualized rate of $100 million as of quarter end, equal to half our fiscal 2023 year-end objective. And at the same time, we're creating new opportunities for our people and reducing the extent to which we need to hire external talent.

In our accounts initiative, we are directly engaging with our customers where we're not generating an adequate return on the efforts and capital we're expending. The response from customers has been positive, and a number of them have already expanded our scope of delivery services, capitalizing on our broader ecosystem and new capabilities. In some cases, we're optimizing our cost basis through automation and greater standardization, while in other cases where we are near contract expiration we have the opportunity to discuss pricing or agree that Kyndryl will exit elements of work that are unprofitable for us.

Our engagement efforts so far resulted in a meaningful increase in the projected margins associated with these accounts reflecting our focus on signing profitable business. In the June quarter, we're already realizing pretax benefits at a rate of roughly $52 million a year, putting us on track to achieve our $200 million year-end run rate goal. The momentum we're demonstrating in our 3 As initiatives is driving us towards the strategic objectives we laid out last year, transforming Kyndryl to operate across a broader technology ecosystem, evolving our business mix, returning to revenue growth and expanding our margins. We're operating differently with the new mission and value proposition.

As we execute on our 3 As initiatives, move forcefully to strengthen the margin profile of our business and progress toward our goal of returning to profitable revenue growth in calendar year 2025, we will unlock substantial value. We'll continue building a culture that is flat, fast and focused on customer success, and we'll continue positioning Kyndryl to be the employer of choice and the partner of choice for customers and technology partners alike.

Now with that, I'll hand over to David to take you through our results and our outlook.

David Wyshner

Thanks, Martin, and hello, everyone. Today, I'd like to discuss our quarterly results, our balance sheet and liquidity and our outlook. Our financial results for the quarter ended June 30, our fiscal first quarter were in line with our expectations and position us to achieve the full year targets we laid out in May.

In the quarter, we generated revenue of $4.3 billion, which represents only a 2% decline in constant currency from our pro forma results a year ago. This includes 2 points of revenue growth we picked up from pass-through revenues related to our former parent. Because most of our revenue in any given quarter is the product of contracts signed over the prior several years, our revenue decline reflects the continuing effects of having been operated as a captive subsidiary of IBM prior to our spin off, not the future potential of our business.

Adjusted EBITDA in the quarter was $491 million. This represents an adjusted EBITDA margin of 11.4%. On a year-over-year basis, our adjusted EBITDA margin was down primarily due to the decline in revenue, a currency headwind of 60 basis points and a 50 basis point impact from some of our software licenses being treated as a subscription rather than a prepaid and amortized expense.

Notably, our gross margin increased 60 basis points sequentially from our March quarter to our June quarter. This is a better reflection of the operational progress we're making. Adjusted pretax loss was $50 million, which is sequentially consistent with our March quarter results and down year-over-year, primarily due to lower revenue and $48 million in currency headwinds. Among our geographic segments, we delivered year-over-year constant currency revenue growth in our Japan and strategic market segments and our strongest margins were in Japan and the United States. Changes in how various IBM-related costs are hitting each of our segments under our new commercial agreement with IBM complicate year-over-year margin comparisons by segment.

We address our customers' needs not only through our geographic operating segments, but also through our 6 global practices: cloud; applications, data and AI; security and resiliency; network and edge; digital workplace; and core enterprise. Our business mix is evolving to reflect demand, with nearly 80% of our signings coming from cloud, apps data and AI, security and other growth areas and only 20% from core enterprise and zCloud. More importantly, our adjusted quarterly results were very much in line with our expectations.

Turning to our cash flow and balance sheet. Our adjusted free cash flow was negative $32 million in the quarter. We've provided a bridge from our Q1 adjusted pretax loss of $50 million to our free cash flow. Our gross capital expenditures in the quarter, including some CapEx due to our separation were $213 million, and we received $7 million of proceeds from asset dispositions. Working capital and other didn't contribute to cash flow in the quarter, but this is an opportunity for us for the year as a whole.

Our financial position remains strong. Our cash balance at June 30 was $1.9 billion, which reflects both the decline in the dollar value of our international cash and our use of $65 million for transaction-related payments. Our cash balance, combined with available debt capacity under committed borrowing facilities, gave us $5 billion of liquidity at quarter end. Our debt maturities are well laddered from late 2024 to 2041. We had no borrowings outstanding under our revolving credit facility, and our net debt at quarter end was $1.3 billion. As a result, our net leverage sits well within our target range. We are rated investment grade by both Moody's and S&P, and to add to that, on Tuesday Fitch announced that it rates us as investment grade as well.

As we think about capital allocation, our top priorities are to maintain strong liquidity, remain investment grade and reinvest in our business. As we've said before, we view being investment grade as a commercial imperative given the importance of this to our customers. And because of the spin-related cash outlays we have in front of us, most of the free cash flow we'll generate this year is, in many ways, already spoken for.

As Martin mentioned, we're making rapid progress on our 3 As initiatives. Our momentum supports our expectation that over the medium term, our alliances initiative will drive signings, revenue and over time, roughly $200 million in annual pretax income. Our advanced delivery initiative will drive cost savings equating over time to roughly $600 million in annual pretax income and our accounts initiative will drive annual pretax income of $800 million. We're also pursuing growth in advisory and implementation services and among our global practices, which is incremental to the benefits coming from our 3 As initiative, and we see opportunities to control expenses throughout our business.

We expect that these efforts over time will contribute roughly $400 million in annual pretax income. Sometimes investors ask us what the market doesn't fully appreciate about the Kyndryl story. Here's 1 item I'd like to highlight from a financial perspective. We're a company that generated $134 million in pro forma adjusted pretax income last year and has tangible plans to drive $2 billion of contribution to our annual pretax income over the medium term. The magnitude of the earnings growth opportunity we're tackling is a big deal and will be a foundational source of value creation for Kyndryl. I hope that this update on our progress on these initiatives gives you confidence in our eagerness and ability to seize this enormous opportunity.

In light of the progress we're making on our key initiatives and in our business generally, we're reaffirming the fiscal 2023 earnings guidance we provided in May, and are updating our revenue forecast solely to reflect movements in exchange rates. In particular, we continue to expect to drive double-digit constant currency growth in signings in fiscal '23 compared to calendar year 2021. Consistent with the outlook we shared in May, we continue to expect our revenue to decline 3% to 4% in constant currency compared to the 12 months ended March 2022 and 4% to 6% in constant currency compared to fiscal 2021. With the dollar having continued to strengthen, this guidance now implies revenue of $16.3 billion to $16.5 billion this fiscal year.

Our outlook continues to be for our adjusted pretax margin to be in the range of 0% to 1%. This is consistent with our 2020 and 2021 pro forma results despite 120 basis points of expected currency headwinds this year, and we continue to expect our adjusted EBITDA margin to be 13% to 14% in fiscal 2023. As Martin mentioned, we believe demand for IT infrastructure services is largely insulated from broader macroeconomic trends. And to date, we have not seen any significant changes in our customers' approach.

Digital transformation and procuring talent, best practices and global scale continue to be important to large organizations. Let me comment on a few other macro factors that investors often ask about. First, while services demand feels solid, general price inflation is driving wage inflation. We've been doing well in terms of attracting and retaining the people we need, but higher prices and big headline inflation figures are impacting the salaries that existing employees and new hires expect.

We're also seeing inflationary pressures in other areas, especially in energy costs, but our contracts typically contain inflation protection mechanisms that mitigate the effects of rising costs. Second, currency movements are having an unusually pronounced impact this year, affecting not only the value of our foreign earnings, but also the dollar value of international cash and our margins, since the currency composition of our costs often differs from the currencies in which we source our revenues. Our hedging strategies and mitigating actions are helping us offset inflation and currency pressure. Currency alone is having a $200 million negative impact on our projected pretax earnings growth this year.

From a cash flow perspective, we continue to target about $750 million of gross capital expenditures and $700 million of net capital expenditures compared to about $900 million of depreciation expense. As a reminder, there is some seasonality in our revenues and margins with the October to December quarter typically being the strongest. While our results in our September quarter should be broadly similar to our June quarter, we see our full year margins being higher than our Q1 margins because of the favorable December quarter seasonality and the ramping of benefits from our 3 As initiatives.

Over the medium term, we remain committed to returning to revenue growth by calendar 2025, delivering margin expansion and driving free cash flow growth. We have a solid game plan to drive our progress, and this game plan starts with the steps we've already taken to expand our technology partnerships and with the meaningful initiatives we're implementing this year. Separately, we've gotten a number of questions, comments and [WOWS] from investors about 1 particular slide we published in May. This slide provides a breakdown between our margin-challenged focus accounts and the rest of our business.

As this slide highlighted, our aggregate results masked the fact that within Kyndryl we have a strong $10 billion business, which we refer to as a blueprint for how we want to operate. This blueprint consists of accounts that represent about 60% of our revenue, generate average gross margins north of 20% and reflect our ability to get paid appropriately for the mission-critical services we provide. This blueprint is most of what we do and a source of stockholder value hiding in plain sight. And the reason that this value is underappreciated is our other roughly $8 billion of focus accounts revenue. This revenue stream generates virtually no gross margin and, after SG&A expenses, is losing money.

Our accounts initiative is all about the opportunity to make our focus accounts look more like the majority blueprint of our business over time by addressing elements of our customer relationships that generate substandard margins. Over time, if we close even half of the gross margin gap between our focus accounts and our blueprint accounts, we'll generate the $800 million in incremental earnings that we've targeted from these accounts. That's why our accounts initiative is a major priority for us.

As Martin highlighted, in the June quarter, pretax margins associated with new signings tied to our focus accounts were up meaningfully. Since the beginning of the year, the overall pretax margin of our signings has been in the mid- to high-single digits. What that means is that if our P&L for the next few quarters reflected only our recently signed deals, we'd be operating at mid- to high-single-digit adjusted pretax margins, not the 0% to 1% margin generated largely by our pre-spin legacy signings.

In fact, even though our signings were down year-over-year in the June quarter when measured based on revenue, the gross profit we expect to generate over the next year from our June quarter signings is up year-over-year, and it's gross profit, and then pretax profit, that we're most focused on.

In closing, as an independent company, we're solidifying our position as a cost-effective, gold-standard provider of essential IT services. We're advancing toward the fiscal 2023 earnings targets we laid out in May. We're also executing on the strategies and initiatives that will drive longer-term progress, future growth and stronger earnings in our business. I'm particularly enthusiastic about our strong progress on our 3 As initiatives and the margins our latest signings will generate. Compared to our P&L, our tangible progress in these areas better exemplifies our potential and our zeal to transform our business and our drive to create stockholder value.

With that, let me turn things back to Martin.

Martin Schroeter

Thanks, David. Before we turn to Q&A, let me remind you why we're so enthusiastic about Kyndryl's future. As an independent company, we are seizing our now larger market opportunity, bringing incremental and differentiated value to customers and focusing on driving profitable growth. We're committed to investing in our business, and we'll continue extending relationships with our ecosystem partners and customers. We are a trusted partner with tremendous expertise, experience and scale. And as technology continues to evolve, our customers look to Kyndryl to keep them operating efficiently and ahead of the technology curve.

Our 3 As initiatives will deliver substantial benefits. We have the financial flexibility to execute our growth strategy to invest in our people and to create a winning culture, a culture that will create significant value for our employees, our customers and our stockholders.

With that, David and I look forward to your questions.

Question-and-Answer Session


[Operator Instructions] We will take our first question from Tien-Tsin Huang from JPMorgan.

Tien-Tsin Huang

Okay. Great. Appreciate the enthusiasm, definitely came through on the call. I wanted to ask, I suppose, on signings, if that's okay. I'm curious about sort of visibility there and timing of revenue conversion, et cetera. Have you observed any changes? And I know you talked about double-digit signings growth looking ahead. So hence, the visibility question?

Martin Schroeter

Thanks, Tien-Tsin, and thanks for joining the call. Look, a few things I'd say. First, obviously, our confidence in growing signings double digits this year stems from the pipeline that we're looking at. And we've got a terrific pipeline. We see it in the parts of our business where we're really focused, such as our A&IS business, which grew quite well this quarter as it did the prior quarter, and such as the progress we're making with our hyperscale alliance partners. So we feel great about the pipeline, but as you also know, we're really focused on the margin profile of these deals.

And as David noted, the gross profit dollars, for instance, in the signings from just this most recent quarter -- the gross profit dollars in the next year also grow within that signings pool. So while the overall signings for that short period, the 90 days, were down, the gross profit dollars still provide us growth for the next 12 months, which again is our focus. So we feel really good about the pipeline. We feel really good about the teams executing in the areas that are our biggest focus, and we feel really good about the profit profile of what we're signing.

Now having said all that, look, when you're focused -- when one is focused on the quality of what you're signing, and when one is really focused on making sure we get the right things into the backlog, that can elongate deal cycles and elongate discussions with our customers. And look, we're okay with that, because we want to get to the right signings -- the right signings profile, which we did in the most recent quarter and in the quarter prior to that.

So we see a great pipeline of the kinds of quality deals and the kinds of quality revenue streams to go into the backlog, as evidenced again by the gross profit over the next year and by the margin profile. And David commented, as did I in the prepared remarks, on the pretax margin profile of what's going into the backlog. So we feel good about the growth we see, and probably more importantly, we feel really good about the quality and the profit profile of what's going in.

David Wyshner

And two things I'd just add related to the signings number. The June quarter was a tough comp for us; we knew that going in, because both of our two largest deals in calendar year 2021 fell in the June quarter, and those totaled more than $900 million. That created a tough comp for us, and obviously, we don't have that issue going forward. The second issue is that the December quarter is traditionally our biggest signings quarter. As a result, how the second half of this calendar year plays out, particularly the December quarter, ends up being a big driver of how we're going to get to double-digit signings growth for fiscal 2023.

Tien-Tsin Huang

I did have one more, if you don't mind. I just want to ask on gross margin, since you mentioned it; we always like to look at gross margin as a proxy for contract execution, pricing, labor costs, et cetera. So obviously, it sounds like that's doing well, and there weren't any unusual items there. But what about the capital intensity side as well? Any change to consider there, especially as we think about cash flow conversion for the rest of the year?

David Wyshner

Yes. I think we continue to see the business becoming less capital intensive. Our CapEx is underrunning depreciation, and we expect that to be the case, probably even a bit more so than it was in the June quarter, as we look out over the remainder of the year. In addition, I think the amount of cash we end up outlaying for capitalized software and transition costs and startup costs is probably going to underrun our amortization as well this year, which should be helpful to free cash flow.

So again, as we move to more advisory work and strengthen the margin profile of the business that we're signing, we see less capital intensity as part of that and that should be helpful to free cash flow, not only in fiscal 2023, but also over the longer term.


We'll go next to Jamie Friedman from Susquehanna Financial Group.

Unidentified Analyst

This is Spencer on for Jamie. Congratulations on the results. It seems the year is already tracking ahead of plan on some key metrics. Is the guidance just conservative, or are there other considerations we should be looking at?

David Wyshner

I think the -- I think we feel very good about the progress that we're making on a number of fronts, particularly the strategic fronts: the 3 As and the margin at which we're signing up business. And when you look at something like advanced delivery, where we've already achieved half of our full year target for the benefits that we expect to generate, it's a sign that we're making good progress.

I'm hesitant to characterize the guidance in one direction or another. But I would point out that while we're making really good progress on the strategic front, with the 3 As and with the partnerships that we have, we have also been facing currency headwinds, and the amount of currency impact on our EBITDA and our pretax margin is, we currently estimate, a bit more than we would have estimated three months ago because of the way exchange rates have moved over this period of time.

So while we're seeing progress on the strategic front in areas that we control, some of the areas that are outside of our control, such as exchange rates, have been a little bit more of a challenge. So I really don't want to characterize the guidance one way or another.

Martin Schroeter

Once again, thanks, everyone, for joining us today. We're delighted with the significant progress we made this quarter, obviously, on our 3 As and in getting our business back to profitable growth. We remain very excited about the opportunity ahead. We serve our customers' mission-critical needs with more capabilities than ever before. And quite frankly, the idiosyncratic nature of a lot of the opportunities we have to turn this business around, and the progress we're making on them, keep us energized and motivated to deliver. So thanks again for joining, and we'll talk to you after the next quarter.


This concludes today's Kyndryl quarterly earnings call and webcast. You may disconnect your line at this time, and have a wonderful day.

Bega on-boards SXiQ for Lion's Dairy and Drinks tech migration
John Hanna (SXiQ)

Credit: IBM

SXiQ, an IBM company, has been called on to migrate the Lion Dairy and Drinks business over to its new owner, Bega Cheese.

Lion Dairy and Drinks was acquired in late 2020, bringing the manufacturing, marketing, sales and distribution of a series of brands under new ownership, including Big M, Dare, Pura, Dairy Farmers, Farmers Union, Masters, Yoplait, Juice Brothers and Daily Juice.

According to SXiQ, the acquisition necessitated a technology transition within a 12-month time frame, with Bega needing applications, data and processes to be moved into existing or expanded infrastructure.

Additionally, the deal was based on application and data separation, which was to leave core infrastructure with the seller.

As a result, SXiQ migrated the infrastructure and managed the transition of 31 physical sites performing production, distribution and administration duties, all within the time frame.