Big Blue is a nickname used since the 1980s for the International Business Machines Corporation (IBM). The moniker may have arisen from the blue tint of its early computer displays, or from the deep blue color of its corporate logo.
Big Blue arose in the early 1980s in the popular and financial press as a nickname for IBM. The name's specific origins are unclear, but it is generally assumed to refer to the blue tint of the cases of the company's computers.
The nickname was embraced by IBM, which has been content to leave its origins in obscurity and has named many of its projects in homage to the nickname. For example, Deep Blue, IBM's chess-playing computer, challenged and ultimately defeated world champion Garry Kasparov in a controversial 1997 match.
The first known print reference to the Big Blue nickname appeared in the June 8, 1981, edition of Businessweek magazine, and is attributed to an anonymous IBM enthusiast.
“No company in the computer business inspires the loyalty that IBM does, and the company has accomplished this with its almost legendary customer service and support … As a result, it is not uncommon for customers to refuse to buy equipment not made by IBM, even though it is often cheaper. ‘I don't want to be saying I should have stuck with the “Big Blue,”’ says one IBM loyalist. ‘The nickname comes from the pervasiveness of IBM's blue computers.’”
Others have associated the Big Blue nickname with the company's logo and its one-time dress code, as well as IBM's historical association with blue-chip stocks.
IBM began in 1911 as the Computing-Tabulating-Recording Company (CTR) in Endicott, NY. CTR was a holding company created by Charles R. Flint that amalgamated three companies that together produced scales, punch-card data processors, employee time clocks, and meat slicers. In 1924, CTR was renamed International Business Machines.
In the following century, IBM would go on to become one of the world’s top technological leaders, developing, inventing, and building hundreds of hardware and software information technologies. IBM is responsible for many inventions that quickly became commonplace, including the UPC barcode, the magnetic stripe card, the personal computer, the floppy disk, the hard disk drive, and the ATM.
IBM technologies were crucial to the implementation of U.S. government initiatives such as the launch of the Social Security Act in 1935 and many NASA missions, from the 1963 Mercury flight to the 1969 moon landing and beyond.
IBM holds the most U.S. patents of any business and, to date, IBM employees have earned many notable honors, including five Nobel Prizes and six Turing Awards.
One of the first multinational conglomerates to emerge in U.S. history, IBM maintains a global presence, operating in 175 countries and employing some 350,000 people.
IBM has underperformed the broader S&P 500 index and Nasdaq-100 index. Significant divergence began in 1985 when the Nasdaq-100 and S&P 500 moved higher while IBM was mostly flat or lower until 1997. Since then it has continued to lose ground, especially when compared to the Nasdaq-100 index.
The underperformance of the stock between 1985 and 2019 is underscored by the firm's financial results. Between 2005 and 2012, net income generally rose, but at less than 12% per year on average. Between 2012 and 2017, net income fell by 65%, before recovering in 2018 and 2019. Even so, 2019 net income was still about 43% lower than it was in 2012.
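Taken together, those percentages imply a substantial rebound from the 2017 trough. A quick back-of-the-envelope check, using only the relative figures quoted above (not actual dollar amounts):

```python
# Express each year's net income relative to 2012 (= 1.0), using only the
# percentage changes quoted in the article.
income_2012 = 1.0
income_2017 = income_2012 * (1 - 0.65)   # fell 65% between 2012 and 2017
income_2019 = income_2012 * (1 - 0.43)   # still ~43% below 2012 in 2019

# Implied recovery from the 2017 trough to 2019:
recovery = income_2019 / income_2017 - 1
print(f"Implied 2017->2019 rebound: {recovery:.0%}")  # roughly +63%
```

So the 2018-2019 recovery was large in relative terms, even though the absolute level remained well below the 2012 peak.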
IBM is continuing its effort to democratize blockchain technology for developers. The company announced the availability of the IBM Blockchain Platform Starter Plan, designed to provide developers, startups, and enterprises with the tools to build blockchain proofs-of-concept, along with an end-to-end developer experience.
“What do you get when you offer easy access to an enterprise blockchain test environment for three months?” Jerry Cuomo, VP of blockchain technology at IBM, wrote in a blog post. “More than 2,000 developers and tens of thousands of transaction blocks, all sprinting toward production readiness.”
IBM has been focused on bringing the blockchain to enterprises for years. Earlier this year, the company announced IBM Blockchain Starter Services, Blockchain Acceleration Services and Blockchain Innovation Services.
The platform is powered by the open-source Hyperledger Fabric framework, and features a test environment, suite of education tools and modules, network provisioning, and $500 in credit for starting up a blockchain network. Hyperledger Fabric is an open-source blockchain framework implementation originally developed by Digital Asset and IBM.
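Hyperledger Fabric's actual ledger is far more sophisticated (channels, endorsement policies, a world-state database), but the core idea of a chain of transaction blocks can be sketched in a few lines of illustrative Python. This is a toy model for intuition only, not Fabric code:

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """Bundle transactions with the previous block's hash, so any
    tampering with history invalidates every later block."""
    payload = json.dumps({"tx": transactions, "prev": prev_hash}, sort_keys=True)
    return {"tx": transactions, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

genesis = make_block(["init ledger"], prev_hash="0" * 64)
block1 = make_block(["alice pays bob 5"], prev_hash=genesis["hash"])
block2 = make_block(["bob pays carol 2"], prev_hash=block1["hash"])

# Each block commits to its predecessor's hash, forming the chain.
assert block2["prev"] == block1["hash"]
```

Because each block's hash covers the previous block's hash, rewriting an old transaction would change every subsequent hash, which is what makes the ledger tamper-evident.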
According to the company, the Blockchain Platform was initially built for institutions working collectively towards mission-critical business goals. “And while Starter Plan was originally intended as an entry point for developers to test and deploy their first blockchain applications, users also now include larger enterprises creating full applications powered by dozens of smart contracts, eliminating many of the repetitive legacy processes that have traditionally slowed or prevented business success,” Cuomo explained.
Other features include: access to IBM Blockchain Platform Enterprise Plan capabilities, code samples available on GitHub, and Hyperledger Composer open-source technology.
“Starter Plan was introduced as a way for anyone to access the benefits of the IBM Blockchain Platform regardless of their level of blockchain understanding or production readiness. IBM has worked for several years to commercialize blockchain and harden the technology for the enterprise based on experience with hundreds of clients across industries,” Cuomo wrote.
There's nothing worse than putting out a buggy software platform. End users are complaining, people are demanding refunds, and management is not happy. Oh, and you've got a lot of extra work to do to fix it.
Just look at the blowback video games like No Man's Sky and Cyberpunk 2077 have gotten in recent years for releases that critics considered buggy or incomplete. It's taken years of further development since its initial release for No Man's Sky to recover some of its reputation -- time will tell if Cyberpunk 2077 can do the same. Either way, it's not a great position to be in.
When developing new software, getting it right the first time is critical. That's why Rational Software Corp., later acquired by IBM, developed the Rational Unified Process (RUP) in the late 1990s, a methodology that remains popular today. RUP provides a simplified way for software development teams to create new products while reducing risk.
So, what exactly is RUP? This guide will break down how it can help with project execution and how to implement it.
The Rational Unified Process model is an iterative software development procedure that divides product development into four distinct phases: inception, elaboration, construction, and transition.
Breaking development down this way helps companies organize work by identifying each phase and executing its tasks more efficiently. Some businesses also adopt the RUP project management process as a development best practice.
As noted, there are four project phases of RUP, each identifying a specific step in the development of a product.
The development process begins with the idea for the project, known as the inception phase. The team determines the cost-benefit of the idea and maps out necessary resources, such as technology, assets, funding, staffing, and more.
The primary purpose of this phase is to make the business case for creating the software. The team will look at financial forecasts, as well as create a basic project plan to map out what it would look like to execute the project and generally what it would take to do so. A risk assessment would also factor into the discussion.
During this phase, the project manager may opt to kill the project if it doesn't look worth the company's time before any resources are expended on product development.
What’s happening: The team is creating a justification for the existence of this software project. It’s trying to tell management, “This new software will bring value to the company and the risks appear relatively small in comparison at first glance -- as a result, please let us start planning this out in more detail.”
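The inception gate ultimately reduces to a cost-benefit comparison with a risk haircut. A minimal sketch of that decision, where the risk weighting and figures are illustrative assumptions rather than anything RUP prescribes:

```python
def inception_go_no_go(expected_benefit, expected_cost, risk_factor):
    """Toy inception-phase gate: discount the benefit by the assessed
    risk (0.0 = no risk, 1.0 = certain failure) and compare to cost."""
    risk_adjusted_benefit = expected_benefit * (1 - risk_factor)
    return "go" if risk_adjusted_benefit > expected_cost else "kill"

# A project whose benefit survives the risk haircut proceeds to elaboration;
# a riskier version of the same project gets killed before resources are spent.
print(inception_go_no_go(expected_benefit=500_000,
                         expected_cost=200_000, risk_factor=0.3))  # go
print(inception_go_no_go(expected_benefit=500_000,
                         expected_cost=200_000, risk_factor=0.7))  # kill
```

Real business cases weigh many more factors, but the shape of the decision, benefit discounted by risk versus cost, is the same.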
If the software project passes the “smell” test -- i.e., the company thinks that on first pass the project benefits appear to outweigh the risks -- the elaboration phase is next. In this phase, the team dives deeper into the details of software development and leaves no stone unturned to ensure there are no showstoppers.
The team should map out resources in more detail and create a software development architecture. It considers all potential applications and affiliated costs associated with the project.
What’s happening: During this phase, the project is starting to take shape. The team hasn’t started development yet, but it is laying the final groundwork to get going. The project may still be derailed in this phase, but only if the team uncovers problems not revealed during the inception phase.
With the project mapped out and resources identified, the team moves on to the construction phase and actually starts building the project. It executes tasks and accomplishes project milestones along the way, reporting back to stakeholders on the project's progress.
Thanks to the specific resources and detailed project architecture established in the previous phase, the team is prepared to build the software and is better positioned to complete it on time and on budget.
What's happening: The team is creating a prototype of the software that can be reviewed and tested. This is the first phase that involves actually creating the product instead of just planning it.
The final phase is transition, which is when the software product is transitioned from development to production. At this point, all kinks are ironed out and the product is now ready for the end user instead of just developers.
This phase involves training end users, beta testing the system, evaluating product performance, and doing anything else required by the company before a software product is released.
During this phase, the management team may compare the end result to the original concept in the inception phase to see if the team met expectations or if the project went off track.
What's happening: The team is polishing the project and making sure it's ready for customers to use. Also, the software is now ready for a final evaluation.
RUP is similar to other project planning techniques, like alliance management, logical framework approach, project crashing, and agile unified process (a subset of RUP), but it is unique in how it specifically breaks down a project. Here are a few best practices to ensure your team implements RUP properly.
By keeping the RUP method iterative -- that is, by breaking the project into those four specific and separate chunks -- you reduce the risk of creating bad software. You improve testing and cut down on risk by giving the project manager more control over software development as a whole.
Rather than create one big, complicated architecture for the project, give each component its own architecture, which reduces the complexity of the project and leaves you less open to variability. This also gives you more flexibility and control during development.
Developing software using the RUP process is all about testing, testing, and more testing. RUP allows you to implement quality control at each stage of the project, and you must take advantage of that to ensure development is completed properly. This will help you detect defects, track them in a database, and ensure the product works properly in subsequent testing before releasing to the end user.
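"Track them in a database" can start as simply as recording each defect with the phase that caught it and its severity. A minimal in-memory sketch, with invented field names, of the kind of record-keeping a real defect tracker formalizes:

```python
defects = []  # stand-in for a real defect-tracking database

def log_defect(phase, description, severity):
    defects.append({"phase": phase, "description": description,
                    "severity": severity, "status": "open"})

def open_defects(phase=None):
    """Defects still open, optionally filtered by the phase that found them."""
    return [d for d in defects
            if d["status"] == "open" and (phase is None or d["phase"] == phase)]

log_defect("construction", "login form crashes on empty password", "high")
log_defect("transition", "typo in help text", "low")

# Release gate: no open high-severity defects allowed before shipping.
blocking = [d for d in open_defects() if d["severity"] == "high"]
print(f"{len(blocking)} blocking defect(s)")
```

Querying defects per phase also shows where bugs are escaping from, which is exactly the feedback an iterative process is supposed to provide.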
Rigidity doesn’t work with product development, so use RUP’s structure to be flexible. Anticipate challenges and be open to change. Create space within each stage for developers to improvise and make adjustments on the fly. This gives them the opportunity to spot innovative ways of doing things and unleash their creative instincts, which results in a better software product.
If you’re overwhelmed with planning software development projects, you’re not alone. That’s why project management software is such big business these days. Software can help you implement the RUP process by breaking down your next development project.
Try a few software solutions out with your team and experiment with the RUP process in each of them. See if you can complete an entire project with one software solution, then give another one a try. Once you settle on a solution that fits your team, it will make you much more effective at executing projects.
When a computer can figure out whether a movie trailer is going to positively affect an audience or not, it makes you wonder how close we are to computer-generated predictions on everything else in life. The short answer, according to Michael Karasick, IBM's VP and research director at Almaden Labs, is that IBM's Watson is already making them. Since conquering "Jeopardy!", Watson has been focused on predictive healthcare, customer service, investment advice and culinary pursuits. But IBM is not stopping there: it is allowing select customers to use "Watson as a service" and may soon open it up to developers to build Watson apps.
Yes, the Watson technology is still maturing, but I am convinced that within five years the Watson platform will learn faster and make better predictions with each new field it understands. That's because, as Karasick told me, "If you train a system like Watson on domain A and domain B, then it knows how to make the equivalence between terminologies in different domains." That means that as Watson solves problems in chemistry, it can generate probable solutions in physics and metallurgy too.
Imagine how this might be applied to marketing. By using Watson as a service, a business could train Watson to understand its customers, then use predictive models to recognize new products or services that their customers will buy.
Predict new trends and shifting tastes
Watson is a voracious consumer of data, and it doesn’t forget anything. You can feed it data from credit cards, sales databases, social networks, location data, web pages and it can compile and categorize that information to make high probability predictions.
Most strikingly, Watson is well ahead of its competitors in sentiment analysis. According to Karasick, Watson can recognize irony and sarcasm - and properly apprehend the intended meaning. That means Watson can quickly analyze large sample sizes to determine whether a movie trailer, product offering or clothing line is going to work with consumers.
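Watson's sentiment models are proprietary and vastly more capable, but the basic mechanics of scoring a batch of audience reactions can be sketched with a naive word-list approach. Everything here (the word lists, the comments) is an illustrative assumption, and note that this crude method is exactly what fails on the irony and sarcasm Watson reportedly handles:

```python
POSITIVE = {"love", "great", "amazing", "fun"}
NEGATIVE = {"boring", "awful", "hate", "buggy"}

def naive_sentiment(comment):
    """Crude lexicon score: +1 per positive word, -1 per negative word."""
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

comments = ["I love this trailer, looks amazing",
            "boring and awful",
            "great fun"]
scores = [naive_sentiment(c) for c in comments]
share_positive = sum(s > 0 for s in scores) / len(scores)
print(f"{share_positive:.0%} positive")  # 67% positive
```

Scale the same idea to millions of social posts, add models that understand context and tone, and you have the shape of what a marketer would ask Watson to do.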
Analyze social conversations – generate leads
Most social listening solutions on the market today do an adequate job of giving the marketer signals and reports about their industry, competitors, partners and current customers. But it’s up to the marketer to analyze the information and take action.
As Watson has demonstrated in other domains, it can foreseeably predict what information is most important and make recommendations on how to act on it. For example, if it finds a cluster of people discussing problems that the marketer’s solution solves, Watson can automatically notify the sales team or take action on its own to educate the prospective customers.
Determine whether a new innovation will sell or not
Because Watson can learn from one domain of knowledge and make high probability predictions in another, it’s reasonable to assume that if a company wanted to understand whether a new innovation will sell or not, Watson could analyze a company’s current market and customer base to provide success probabilities.
We’re a long way off from a Watson with the taste of a Steve Jobs, but if it has enough understanding of the situation, it can produce insights that give companies a clearer picture of the opportunities and threats.
Computer calculated and automated growth hacking
If you’re a marketer and not familiar with growth hacking, please study up fast. Growth hackers focus on innovative A/B testing techniques to maximize conversions on emails, websites, social media, online content or just about any digital media available to them. It’s a low-cost but more effective alternative to traditional media.
I can see how Watson could proactively and intelligently test, measure and optimize digital content, ads, website pages, even a company's product to efficiently maximize customer growth. Andy Johns of Greylock, formerly a growth hacker for Facebook, Twitter and Quora, told me that Facebook conducted six hacks a day to maximize growth opportunities. I suspect Watson could easily handle 10 times that amount.
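At its core, each "hack" is an A/B comparison of conversion rates. A minimal sketch of picking a winner, with made-up counts; a real growth team would also run a statistical significance test before acting:

```python
def conversion_rate(conversions, visitors):
    return conversions / visitors

def pick_winner(variants):
    """variants: {name: (conversions, visitors)} -> name of best rate."""
    return max(variants, key=lambda name: conversion_rate(*variants[name]))

test_results = {
    "subject_line_a": (120, 4_000),   # 3.0% conversion
    "subject_line_b": (180, 4_000),   # 4.5% conversion
}
print(pick_winner(test_results))  # subject_line_b
```

Automating this loop (generate variants, split traffic, measure, promote the winner) is the repetitive work a system like Watson could plausibly run at far higher volume than a human team.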
This clearly is the digital march of progress. Watson has the potential to eliminate ineffective marketing, elevate good marketing to great marketing, and predict how to better spend marketing dollars in the future.
Put it all together and you’ve revolutionized marketing.
Database professionals are in high demand. If you already work as one, you probably know this. And if you are looking to become a database administrator, that high demand and the commensurate salary may be what is motivating you to make this career move.
How can you advance your career as a database administrator? By taking the courses on this list.
If you want to learn more about database administration to expand your knowledge and move up the ladder in this field, these courses can help you achieve that goal.
Udemy’s Oracle DBA 11g/12c – Database Administration for Junior DBA course can help you get a high-paying position as an Oracle Database Administrator.
Best of all, it can do it in just six weeks.
This database administrator course is a Udemy bestseller that is offered in eight languages. Over 29,000 students have taken it, giving it a 4.3-star rating. Once you complete it and become an Oracle DBA, you will be able to:
To take the intermediate-level course, which includes 11 hours of on-demand video spanning 129 lectures, you should have basic knowledge of UNIX/Linux commands and SQL.
The 70-462: SQL Server Database Administration (DBA) course from Udemy was initially designed to help beginner students ace the Microsoft 70-462 exam. Although that test has been officially withdrawn, you can still use this course to gain some practical experience with database administration in SQL Server.
Many employers seek SQL Server experience since it is one of the top database tools. Take the 70-462: SQL Server Database Administration (DBA) course, and you can gain valuable knowledge on the subject and give your resume a nice boost.
Some of the skills you will learn in the 70-462 course include:
DBA knowledge is not needed to take the 10-hour course that spans 100 lectures, and you will not need to have SQL Server already installed on your computer. In terms of popularity, this is a Udemy bestseller with a 4.6-star rating and over 20,000 students.
Nearly 10,000 students have taken the MySQL Database Administration: Beginner SQL Database Design course on Udemy, making it a bestseller on the platform with a 4.6-star rating.
The course features 71 lectures that total seven hours in length and was created for those looking to gain practical, real-world business intelligence and analytics skills to eventually create and maintain databases.
What can you learn from taking the Beginner SQL Database Design course? Skills such as:
The requirements for taking this course are minimal. It can help to have a basic understanding of database fundamentals, and you will need to install MySQL Workbench and Community Server on your Mac or PC.
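The kind of schema design such a course covers can be sketched in a few statements. The example below uses Python's built-in sqlite3 module as a lightweight stand-in for MySQL, and the table and column names are invented purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
cur = conn.cursor()

# A minimal normalized design: customers and their orders.
cur.executescript("""
CREATE TABLE customers (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE orders (
    id          INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id),
    total       REAL NOT NULL
);
""")
cur.executemany("INSERT INTO customers (id, name) VALUES (?, ?)",
                [(1, "Ada"), (2, "Grace")])
cur.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                [(1, 25.0), (1, 15.0), (2, 40.0)])

# Analytics-style query: revenue per customer.
cur.execute("""
SELECT c.name, SUM(o.total)
FROM customers c JOIN orders o ON o.customer_id = c.id
GROUP BY c.name ORDER BY c.name
""")
print(cur.fetchall())  # [('Ada', 40.0), ('Grace', 40.0)]
```

The same DDL and JOIN/GROUP BY patterns carry over to MySQL Workbench with only minor syntax differences, which is why a beginner design course spends most of its time on exactly these constructs.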
If you want to immerse yourself into the world of database administration and get a ton of bang for your buck, TechRepublic Academy’s Database Administration Super Bundle may be right up your alley.
It gives you nine courses and over 400 lessons equaling over 86 hours that can put you on the fast track to building databases and analyzing data like a pro. A sampling of the courses offered in this bundle include:
Here is another bundle for database administrators from TechRepublic Academy. With the Ultimate SQL Bootcamp, you get nine courses and 548 lessons to help you learn how to:
The Complete Oracle Master Class Bundle from TechRepublic Academy features 181 hours of content and 17 courses to help you build a six-figure career. This intermediate bundle includes certification and provides hands-on, practical training with Oracle database systems.
Some of the skills you will learn include:
Coursera’s Learn SQL Basics for Data Science Specialization course has nearly 7,000 reviews, giving it a 4.5-star rating. Offered by UC Davis, this specialization is geared towards beginners without coding experience who want to become fluent in SQL queries.
The specialization takes four months to complete at a five-hour weekly pace, and it is broken down into four courses:
Skills you can gain include:
Once finished, you will be able to analyze and explore data with SQL, write queries, conduct feature engineering, use SQL with unstructured data sets, and more.
IBM offers the Relational Database Administration (DBA) course on Coursera with a 4.5-star rating. Complete the beginner course that takes approximately 19 hours to finish, and it can count towards your learning in the IBM Data Warehouse Engineer Professional Certificate and IBM Data Engineering Professional Certificate programs.
Some of the skills you will learn in this DBA course include:
Offered by Oracle, the Autonomous Database Administration course from Coursera has a 4.5-star rating and takes 13 hours to complete. It is meant to help DBAs deploy and administer Autonomous databases. Finish it, and you will prepare yourself for the Oracle Autonomous Database Cloud Certification.
Some of the skills and knowledge you can learn from this course include:
Looking for more database administration and database programming courses? Check out our tutorial: Best Online Courses to Learn MySQL.
The O’Reilly Open Source Software Conference (OSCON) is taking place this week in Oregon, gathering together industry leaders to talk about open source, cloud native, data-driven solutions, AI capabilities and product management.
“OSCON has continued to be the catalyst for open source innovation for twenty years, providing organizations with the latest technological advances and guidance to successfully implement the technology in a way that makes sense for them,” said Rachel Roumeliotis, vice president of content strategy at O’Reilly and OSCON program chair. “To keep OSCON at the forefront of open source innovation for the next twenty years, we’ve shifted the program to focus more on software development with topics such as cloud-native technologies. While not all are open source, they allow software developers to thrive and stay ahead of these shifts.”
A number of companies are also taking OSCON as an opportunity to release new software and solutions. Announcements included:
IBM’s Data Asset eXchange (DAX)
DAX is an online hub designed to give developers and data scientists a place to discover free and open datasets under open data licenses. The datasets will use the Linux Foundation’s Community Data License Agreement when possible, and integrate with IBM Cloud and AI services. IBM will also provide new datasets to the online hub regularly.
“For developers, DAX provides a trusted source for carefully curated open datasets for AI. These datasets are ready for use in enterprise AI applications, with related content such as tutorials to make getting started easier,” the company wrote in a post.
DAX joins IBM’s other initiatives to help data scientists and developers discover and access data. IBM Model Asset eXchange (MAX) is geared towards machine learning and deep learning models. The company’s Center for Open-Source Data and AI Technologies will work to make it easier to use DAX and MAX assets.
New open-source projects
IBM also announced a new open-source project designed for Kubernetes. Kabanero is meant to help developers build cloud-native apps. It features governance and compliance capabilities and the ability to architect, build, deploy and manage the lifecycle of a Kubernetes-based app, IBM explained.
“Kabanero takes the guesswork out of Kubernetes and DevOps. With Kabanero, you don’t need to spend time mastering DevOps practices and Kubernetes infrastructure topics like networking, ingress and security. Instead, Kabanero integrates the runtimes and frameworks that you already know and use (Node.js, Java, Swift) with a Kubernetes-native DevOps toolchain. Our pre-built deployments to Kubernetes and Knative (using Operators and Helm charts) are built on best practices. So, developers can spend more time developing scalable applications and less time understanding infrastructure,” Nate Ziemann, product manager at IBM, wrote in a post.
The company also announced Appsody, an open source project to help with cloud-native apps in containers; Codewind, an IDE integration for cloud-native development; and Razee, a project for multi-cluster continuous delivery tooling for Kubernetes.
“As companies modernize their infrastructure and adopt a hybrid cloud strategy, they’re increasingly turning to Kubernetes and containers. Choosing the right technology for building cloud-native apps and gaining the knowledge you need to effectively adopt Kubernetes is difficult. On top of that, enabling architects, developers, and operations to work together easily, while having their individual requirements met, is an additional challenge when moving to cloud,” Ziemann wrote.
WSO2 API Microgateway 3.0 announced
WSO2 is introducing a new version of its WSO2 API Microgateway focused on creating, deploying and securing APIs within distributed microservices architectures. The latest release features developer-first runtime generation, run-time service discovery, support for composing multiple microservices, support for transforming legacy API formats, and separation of the WSO2 API Microgateway toolkit.
“API microgateways are a key part of building resilient, manageable microservices architectures,” said Paul Fremantle, WSO2 CTO and co-founder. “WSO2 API Microgateway 3.0 fits effectively into continuous development practices and has the proven scalability and robustness for mission-critical applications.”
Carbon Relay’s new AIOps platform
Red Sky Ops is a new open-source AIOps platform to help organizations with Kubernetes initiatives as well as deploy, scale and manage containerized apps. According to Carbon Relay, this will help DevOps teams manage hundreds of app variables and configurations. The solution uses machine learning to study, replicate and stress-test app environments as well as configure, schedule and allocate resources.
Carbon Relay has also announced it will be joining the Cloud Native Computing Foundation to better support the Kubernetes community and the use of cloud native technologies.
IBM Research’s Deep Search product uses natural language processing (NLP) to “ingest and analyze massive amounts of data—structured and unstructured.” Over the years, Deep Search has seen a wide range of scientific uses, from Covid-19 research to molecular synthesis. Now, IBM Research is streamlining the scientific applications of Deep Search by open-sourcing part of the product through the release of Deep Search for Scientific Discovery (DS4SD).
DS4SD includes specific segments of Deep Search aimed at document conversion and processing. First is the Deep Search Experience, a document conversion service that includes a drag-and-drop interface and interactive conversion to allow for quality checks. The second element of DS4SD is the Deep Search Toolkit, a Python package that allows users to “programmatically upload and convert documents in bulk” by pointing the toolkit to a folder whose contents will then be uploaded and converted from PDFs into “easily decipherable” JSON files. The toolkit integrates with existing services, and IBM Research is welcoming contributions to the open-source toolkit from the developer community.
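The bulk-conversion workflow the toolkit automates looks roughly like the sketch below. Here `convert_pdf_to_json` is a stub standing in for the real Deep Search service call (the actual toolkit API differs), purely to show the fan-out-over-a-folder pattern described above:

```python
import json
import tempfile
from pathlib import Path

def convert_pdf_to_json(pdf_path):
    """Stub for the Deep Search conversion call; the real toolkit sends
    the PDF to the service and returns structured JSON."""
    return {"source": pdf_path.name, "pages": [], "status": "converted"}

def convert_folder(folder):
    """Convert every PDF in a folder, writing a JSON file next to each."""
    results = []
    for pdf in sorted(Path(folder).glob("*.pdf")):
        doc = convert_pdf_to_json(pdf)
        pdf.with_suffix(".json").write_text(json.dumps(doc))
        results.append(doc)
    return results

# Demo on an empty temporary folder: nothing to convert yet.
demo_dir = tempfile.mkdtemp()
print(convert_folder(demo_dir))  # []
```

Point `convert_folder` at a directory of papers or patents and the loop handles the upload-convert-save cycle for every file, which is the "programmatic bulk conversion" the toolkit advertises.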
IBM Research paints DS4SD as a boon for handling unstructured data (data not contained in a structured database). This data, IBM Research said, holds a “lot of value” for scientific research; by way of example, they cited IBM’s own Project Photoresist, which in 2020 used Deep Search to comb through more than 6,000 patents, documents, and material data sheets in the hunt for a new molecule. IBM Research says that Deep Search offers up to a 1,000× data ingestion speedup and up to a 100× data screening speedup compared to manual alternatives.
The launch of DS4SD follows the launch of GT4SD—IBM Research’s Generative Toolkit for Scientific Discovery—in March of this year. GT4SD is an open-source library to accelerate hypothesis generation for scientific discovery. Together, DS4SD and GT4SD constitute the first steps in what IBM Research is calling its Open Science Hub for Accelerated Discovery. IBM Research says more is yet to come, with “new capabilities, such as AI models and high quality data sources” to be made available through DS4SD in the future. Deep Search has also added “over 364 million” public documents (like patents and research papers) for users to leverage in their research—a big change from the previous “bring your own data” nature of the tool.
The Deep Search Toolkit is freely available online.
As we exited the isolation economy last year, we introduced supercloud as a term to describe something new that was happening in the world of cloud computing.
In this Breaking Analysis, we address the ten most frequently asked questions we get on supercloud:
1. In an industry full of hype and buzzwords, why does anyone need a new term?
2. Aren’t hyperscalers building out superclouds? We’ll try to answer why the term supercloud connotes something different from a hyperscale cloud.
3. We’ll talk about the problems superclouds solve.
4. We’ll further define the critical aspects of a supercloud architecture.
5. We often get asked: Isn’t this just multicloud? Well, we don’t think so and we’ll explain why.
6. In an earlier episode we introduced the notion of superPaaS – well, isn’t a plain vanilla PaaS already a superPaaS? Again – we don’t think so and we’ll explain why.
7. Who will actually build (and who are the players currently building) superclouds?
8. What workloads and services will run on superclouds?
9. What are some examples of supercloud?
10. Finally, we’ll answer what you can expect next on supercloud from SiliconANGLE and theCUBE.
Late last year, ahead of Amazon Web Services Inc.’s re:Invent conference, we were inspired by a post from Jerry Chen called Castles in the Cloud. In that blog he introduced the idea that there were submarkets emerging in cloud that presented opportunities for investors and entrepreneurs, that the big cloud vendors weren’t going to suck all the value out of the industry. And so we introduced this notion of supercloud to describe what we saw as a value layer emerging above the hyperscalers’ “capex gift.”
It turns out that we weren’t the only ones using the term, as both Cornell and MIT have used the phrase in similar but distinct contexts.
The point is something new was happening in the AWS and other ecosystems. It was more than infrastructure as a service and platform as a service and wasn’t just software as a service running in the cloud.
It was a new architecture that integrates infrastructure, unique platform attributes and software to solve new problems that the cloud vendors in our view weren’t addressing by themselves. It seemed to us that the ecosystem was pursuing opportunities across clouds that went beyond conventional implementations of multi-cloud.
In addition, we felt this trend pointed to structural change going on at the industry level that supercloud metaphorically was highlighting.
So that’s the background on why we felt a new catchphrase was warranted. Love it or hate it… it’s memorable.
To that last point about structural industry transformation: Andy Rappaport is sometimes credited with identifying the shift from the vertically integrated mainframe era to the horizontally fragmented personal computer- and microprocessor-based era in his Harvard Business Review article from 1991.
In fact, it was actually David Moschella, an International Data Corp. senior vice president at the time, who introduced the concept in 1987, a full four years before Rappaport’s article was published. Moschella, along with IDC’s head of research Will Zachmann, saw clearly that Intel Corp., Microsoft Corp., Seagate Technology and others would displace the system vendors’ dominance.
In fact, Zachmann accurately predicted in the late 1980s the demise of IBM, well ahead of its epic downfall when the company lost approximately 75% of its value. At an IDC Briefing Session (now called Directions), Moschella put forth a graphic that looked similar to the first two concepts on the chart below.
We don’t have to review the shift from IBM as the epicenter of the industry to Wintel – that’s well-understood.
What isn’t as widely discussed is a structural concept Moschella put out in 2018 in his book “Seeing Digital,” which introduced the idea of the Matrix shown on the righthand side of this chart. Moschella posited that a new digital platform of services was emerging built on top of the internet, hyperscale clouds and other intelligent technologies that would define the next era of computing.
He used the term matrix because the conceptual depiction included horizontal technology rows, like the cloud… but for the first time included connected industry columns. Moschella pointed out that historically, industry verticals had a closed value chain or stack of research and development, production, distribution, etc., and that expertise in that specific vertical was critical to success. But now, because of digital and data, for the first time, companies were able to jump industries and compete using data. Amazon in content, payments and groceries… Apple in payments and content… and so forth. Data was now the unifying enabler and this marked a changing structure of the technology landscape.
Listen to David Moschella explain the Matrix and its implications on a new generation of leadership in tech.
So the term supercloud is meant to imply more than running in hyperscale clouds. Rather, it’s a new type of digital platform comprising a combination of multiple technologies – enabled by cloud scale – with new industry participants from financial services, healthcare, manufacturing, energy, media and virtually all industries. Think of it as kind of an extension of “every company is a software company.”
Basically, thanks to the cloud, every company in every industry now has the opportunity to build their own supercloud. We’ll come back to that.
Let’s address what’s different about superclouds relative to hyperscale clouds.
This one’s pretty straightforward and obvious. Hyperscale clouds are walled gardens: they want your data in their cloud and they want to keep you there. Sure, every cloud player realizes that not all data will go to its cloud, so they’re meeting customers where their data lives with initiatives such as Amazon Outposts, Azure Arc and Google Anthos. But at the end of the day, the more homogeneous they can make their environments, the better the control, security, costs and performance they can deliver. The more complex the environment, the more difficult it is to deliver on their promises and the less margin is left for them to capture.
Will the hyperscalers get more serious about cross-cloud services? Maybe, but they have plenty of work to do within their own clouds. And today, at least, they appear to be providing the tools that will enable others to build superclouds on top of their platforms. That said, we never say never when it comes to companies such as AWS. And for sure we see AWS delivering more integrated digital services, such as Amazon Connect, to solve problems in a specific domain – call centers in this case.
We’ve all seen the stats from IDC or Gartner or whomever that customers on average use more than one cloud. And we know these clouds operate in disconnected silos for the most part. That’s a problem because each cloud requires different skills. The development environment is different, as is the operating environment, with different APIs and primitives and management tools that are optimized for each respective hyperscale cloud. Their functions and value props don’t extend to their competitors’ clouds. Why would they?
As a result, there’s friction when moving between different clouds. It’s hard to share data, move work, secure and govern data, and enforce organizational policies and edicts across clouds.
Supercloud is an architecture designed to create a single environment that enables management of workloads and data across clouds in an effort to take out complexity, accelerate application development, streamline operations and share data safely irrespective of location.
Pretty straightforward, but nontrivial – which is why we often ask chief executives and other senior leaders whether stock buybacks and dividends will yield as much return as building out superclouds that solve really specific problems and create differentiable value for their firms.
Let’s dig in a bit more to the architectural aspects of supercloud. In other words… what are the salient attributes that define supercloud?
First, a supercloud runs a set of specific services, designed to solve a unique problem. Superclouds offer seamless, consumption-based services across multiple distributed clouds.
Supercloud leverages the underlying cloud-native tooling of a hyperscale cloud but it’s optimized for a specific objective that aligns with the problem it’s solving. For example, it may be optimized for cost or low latency or sharing data or governance or security or higher performance networking. But the point is, the collection of services delivered is focused on unique value that isn’t being delivered by the hyperscalers across clouds.
A supercloud abstracts the underlying, siloed primitives of the native PaaS layer from the hyperscale cloud and, using its own platform-as-a-service tooling, creates a common experience across clouds for developers and users. In other words, the superPaaS ensures that the developer and user experience is identical, irrespective of which cloud or location is running the workload.
And it does so in an efficient manner, meaning it has the metadata knowledge and management that can optimize for latency, bandwidth, recovery, data sovereignty or whatever unique value the supercloud is delivering for the specific use cases in the domain.
A supercloud comprises a superPaaS capability that allows ecosystem partners to add incremental value on top of the supercloud platform to fill gaps, accelerate features and innovate. A superPaaS can use open tooling but applies those development tools to create a unique and specific experience supporting the design objectives of the supercloud.
Supercloud services can be infrastructure-related, application services, data services, security services, user services, etc., designed and packaged to bring unique value to customers… again, value that the hyperscalers are not delivering across clouds or on-premises.
Finally, these attributes are highly automated where possible. Superclouds take a page from hyperscalers in terms of minimizing human intervention wherever possible, applying automation to the specific problem they’re solving.
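The attributes above can be sketched in code. What follows is a minimal, hypothetical illustration – the provider names, latency figures and the lowest-latency placement objective are all made up for this sketch, not drawn from any vendor's implementation – of a single abstraction layer that hides cloud silos behind one interface and automates placement using metadata:

```python
class CloudAdapter:
    """Wraps one cloud's native primitives behind a common interface.
    Provider names and latencies here are illustrative only."""
    def __init__(self, name, latency_ms):
        self.name = name
        self.latency_ms = latency_ms  # metadata the supercloud optimizes on
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)


class Supercloud:
    """A single environment over multiple clouds, optimized for one
    objective (here: lowest write latency), per the attributes above."""
    def __init__(self, adapters):
        self.adapters = adapters

    def put(self, key, value):
        # Automated placement: route the write to the lowest-latency cloud,
        # with no human intervention and no cloud named by the caller.
        target = min(self.adapters, key=lambda a: a.latency_ms)
        target.put(key, value)
        return target.name

    def get(self, key):
        # Location-transparent read: search every cloud transparently.
        for adapter in self.adapters:
            value = adapter.get(key)
            if value is not None:
                return value
        return None


sc = Supercloud([CloudAdapter("aws", 12), CloudAdapter("azure", 20)])
placed_on = sc.put("orders/42", {"total": 99})
print(placed_on)            # aws (lowest latency wins)
print(sc.get("orders/42"))  # {'total': 99}
```

The point of the sketch is the shape, not the stubs: the developer talks to one API, and the optimization objective (latency here; it could equally be cost, sovereignty or recovery) lives in the platform layer rather than in application code.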
What we’d say to that is: Perhaps, but not really. Call it multicloud 2.0 if you prefer a more familiar label. But as Dell’s Chuck Whitten proclaimed, multicloud by design is different from multicloud by default.
What he means is that, to date, multicloud has largely been a symptom of multivendor… or of M&A. And when you look at most so-called multicloud implementations, you see things like an on-prem stack wrapped in a container and hosted on a specific cloud.
Or increasingly a technology vendor has done the work of building a cloud-native version of its stack and running it on a specific cloud… but historically it has been a unique experience within each cloud with no connection between the cloud silos. And certainly not a common developer experience with metadata management across clouds.
Supercloud sets out to build incremental value across clouds and above hyperscale capex that goes beyond cloud compatibility within each cloud. So if you want to call it multicloud 2.0, that’s fine.
We choose to call it supercloud.
Well, we’d say no. A supercloud and its corresponding superPaaS layer give the freedom to store, process, manage, secure and connect islands of data across a continuum, with a common developer experience across clouds.
Importantly, the sets of services are designed to support the supercloud’s objectives – e.g., data sharing or data protection or storage and retrieval or cost optimization or ultra-low latency, etc. In other words, the services offered are specific to that supercloud and will vary by each offering. OpenShift, for example, can be used to construct a superPaaS but in and of itself isn’t a superPaaS. It’s generic.
The point is that a supercloud and its inherent superPaaS will be optimized to solve specific problems such as low latency for distributed databases or fast backup and recovery and ransomware protection — highly specific use cases that the supercloud is designed to solve for.
SaaS as well is a subset of supercloud. Most SaaS platforms either run in their own cloud or have bits and pieces running in public clouds (e.g. analytics). But the cross-cloud services are few and far between or often nonexistent. We believe SaaS vendors must evolve and adopt supercloud to offer distributed solutions across cloud platforms and stretching out to the near and far edge.
Another question we often get is: Who has a supercloud and who is building a supercloud? Who are the contenders?
Well, most companies that consider themselves cloud players will, we believe, be building superclouds. Above is a common Enterprise Technology Research graphic we like to show, with Net Score, or spending momentum, on the Y axis and Overlap, or pervasiveness in the ETR surveys, on the X axis. This is from the April survey of well over 1,000 chief information officers and information technology buyers. And we’ve randomly chosen a number of players we think are in the supercloud mix, and we’ve included the hyperscalers because they are the enablers.
We’ve added some of those nontraditional industry players we see building superclouds such as Capital One, Goldman Sachs and Walmart, in deference to Moschella’s observation about verticals. This goes back to every company being a software company. And rather than pattern-matching an outdated SaaS model we see a new industry structure emerging where software and data and tools specific to an industry will lead the next wave of innovation via the buildout of intelligent digital platforms.
We’ve talked a lot about Snowflake Inc.’s Data Cloud as an example of supercloud, as well as the momentum of Databricks Inc. (not shown above). VMware Inc. is clearly going after cross-cloud services. Basically every large company we see is either pursuing supercloud initiatives or thinking about it. Dell Technologies Inc., for example, showed Project Alpine at Dell Technologies World – that’s a supercloud in development. Snowflake is introducing a new app dev capability based on its superPaaS (our term, of course; Snowflake doesn’t use the phrase). Others we see in the mix include MongoDB Inc., Couchbase Inc., Nutanix Inc., Veeam Software, CrowdStrike Holdings Inc., Okta Inc. and Zscaler Inc. Even the likes of Cisco Systems Inc. and Hewlett Packard Enterprise Co., in our view, will be building superclouds.
Although ironically, as an aside, Fidelma Russo, HPE’s chief technology officer, said on theCUBE she wasn’t a fan of cloaking mechanisms. But when we spoke to HPE’s head of storage services, Omer Asad, we felt his team is clearly headed in a direction that we would consider supercloud. It could be semantics or it could be that parts of HPE are in a better position to execute on supercloud. Storage is an obvious starting point. The same can be said of Dell.
Listen to Fidelma Russo explain her aversion to building a manager of managers.
And we’re seeing emerging companies like Aviatrix Systems Inc. (network performance), Starburst Data Inc. (self-service analytics for distributed data), Clumio Inc. (data protection – not supercloud today but working on it) and others building versions of superclouds that solve a specific problem for their customers. And we’ve spoken to independent software vendors such as Adobe Systems Inc., Automatic Data Processing LLC and UiPath Inc., which are all looking at new ways to go beyond the SaaS model and add value within cloud ecosystems, in particular building data services that are unique to their value proposition and will run across clouds.
So yeah – pretty much every tech vendor with any size or momentum and new industry players are coming out of hiding and competing… building superclouds. Many that look a lot like Moschella’s matrix with machine intelligence and artificial intelligence and blockchains and virtual reality and gaming… all enabled by the internet and hyperscale clouds.
It’s moving fast and it’s the future, in our opinion, so don’t get too caught up in the past or you’ll be left behind.
We’ve given many in the past, but let’s try to be a bit more specific. Below we cite a few and we’ll answer two questions in one section here: What workloads and services will run in superclouds and what are some examples?
Analytics. Snowflake is the furthest along with its data cloud in our view. It’s a supercloud optimized for data sharing, governance, query performance, security, ecosystem enablement and ultimately monetization. Snowflake is now bringing in new data types and open-source tooling and it ticks the attribute boxes on supercloud we laid out earlier.
Converged databases. Running transaction and analytics workloads. Take a look at what Couchbase is doing with Capella and how it’s enabling stretching the cloud to the edge with Arm-based platforms and optimizing for low latency across clouds and out to the edge.
Document database workloads. Look at MongoDB – a developer-friendly platform that with Atlas is moving to a supercloud model running document databases very efficiently. Accommodating analytic workloads and creating a common developer experience across clouds.
Data science workloads. For example, Databricks is bringing a common experience for data scientists and data engineers driving machine intelligence into applications and fixing the broken data lake with the emergence of the lakehouse.
General-purpose workloads. For example, VMware’s domain. Very clearly there’s a need to create a common operating environment across clouds and on-prem and out to the edge and VMware is hard at work on that — managing and moving workloads, balancing workloads and being able to recover very quickly across clouds.
Network routing. This is the primary focus of Aviatrix, building what we consider a supercloud and optimizing network performance and automating security across clouds.
Industry-specific workloads. For example, Capital One announcing its cost optimization platform for Snowflake – piggybacking on Snowflake’s supercloud. We believe it’s going to test that concept outside its own organization and expand across other clouds as Snowflake grows its business beyond AWS. Walmart Inc. is working with Microsoft to create an on-prem to Azure experience – yes, that counts. We’ve written about what Goldman is doing and you can bet dollars to donuts that Oracle Corp. will be building a supercloud in healthcare with its Cerner acquisition.
Supercloud is everywhere you look. Sorry, naysayers. It’s happening.
With all the industry buzz and debate about the future, John Furrier and the team at SiliconANGLE have decided to host an event on supercloud. We’re motivated and inspired to further the conversation. TheCUBE on Supercloud is coming.
On Aug. 9, out of our Palo Alto studios, we’ll be running a live program on the topic. We’ve reached out to a number of industry participants — VMware, Snowflake, Confluent, Sky High Security, HashiCorp, Cloudflare and Red Hat — to get the perspective of technologists building superclouds.
And we’ve invited a number of vertical industry participants in financial services, healthcare and retail that we’re excited to have on along with analysts, thought leaders and investors.
We’ll have more details in the coming weeks, but for now if you’re interested please reach out to us with how you think you can advance the discussion and we’ll see if we can fit you in.
So mark your calendars and stay tuned for more information.
Thanks to Alex Myerson, who does the production, podcasts and media workflows for Breaking Analysis. Special thanks to Kristen Martin and Cheryl Knight, who help us keep our community informed and get the word out, and to Rob Hof, our editor in chief at SiliconANGLE.
Remember we publish each week on Wikibon and SiliconANGLE. These episodes are all available as podcasts wherever you listen.
Email firstname.lastname@example.org, DM @dvellante on Twitter and comment on our LinkedIn posts.
Also, check out this ETR Tutorial we created, which explains the spending methodology in more detail. Note: ETR is a separate company from Wikibon and SiliconANGLE. If you would like to cite or republish any of the company’s data, or inquire about its services, please contact ETR at email@example.com.
Here’s the full video analysis:
All statements made regarding companies or securities are strictly beliefs, points of view and opinions held by SiliconANGLE media, Enterprise Technology Research, other guests on theCUBE and guest writers. Such statements are not recommendations by these individuals to buy, sell or hold any security. The content presented does not constitute investment advice and should not be used as the basis for any investment decision. You and only you are responsible for your investment decisions.
Disclosure: Many of the companies cited in Breaking Analysis are sponsors of theCUBE and/or clients of Wikibon. None of these firms or other companies have any editorial control over or advanced viewing of what’s published in Breaking Analysis.
In the rush to build, test and deploy AI systems, businesses often lack the resources and time to fully validate their systems and ensure they're bug-free. In a 2018 report, Gartner predicted that 85% of AI projects will deliver erroneous outcomes due to bias in data, algorithms or the teams responsible for managing them. Even Big Tech companies aren't immune to the pitfalls — for one client, IBM ultimately failed to deliver an AI-powered cancer diagnostics system that wound up costing $62 million over 4 years.
Inspired by "bug bounty" programs, Jeong-Suh Choi and Soohyun Bae founded Bobidi, a platform aimed at helping companies validate their AI systems by exposing the systems to the global data science community. With Bobidi, Bae and Choi sought to build a product that lets customers connect AI systems with the bug-hunting community in a "secure" way, via an API.
The idea is to let developers test AI systems and biases — that is, the edge cases where the systems perform poorly — to reduce the time needed for validation, Choi explained in an email interview. Bae was previously a senior engineer at Google and led augmented reality mapping at Niantic, while Choi was a senior manager at eBay and headed the "people engineering" team at Facebook. The two met at a tech industry function about 10 years ago.
"By the time bias or flaws are revealed from the model, the damage is already irrevocable," Choi said. "For example, natural language processing algorithms [like OpenAI's GPT-3] are often found to be making problematic comments, or mis-responding to those comments, related to hate speech, discrimination, and insults. Using Bobidi, the community can 'pre-test' the algorithm and find those loopholes, which is actually very powerful as you can test the algorithm with a lot of people under certain conditions that represent social and political contexts that change constantly."
To test models, the Bobidi "community" of developers builds a validation dataset for a given system. As developers attempt to find loopholes in the system, customers get an analysis that includes patterns of false negatives and positives and the metadata associated with them (e.g., the number of edge cases).
Exposing sensitive systems and models to the outside world might give some companies pause, but Choi asserts that Bobidi "auto-expires" models after a certain number of days so that they can't be reverse-engineered. Customers pay for the service based on the number of "legit" attempts made by the community, which works out to a dollar ($0.99) per 10 attempts.
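The pricing quoted above reduces to simple arithmetic: $0.99 per 10 legitimate attempts. A minimal sketch, assuming fractional blocks are billed pro rata (the article doesn't specify Bobidi's actual rounding policy, and the function name is ours):

```python
def validation_cost(legit_attempts: int) -> float:
    """Estimate the customer's bill at $0.99 per 10 'legit' attempts,
    billing partial blocks of 10 proportionally (an assumption)."""
    PRICE_PER_TEN = 0.99
    return round(legit_attempts * PRICE_PER_TEN / 10, 2)


print(validation_cost(10))    # 0.99
print(validation_cost(1000))  # 99.0
```

So a community campaign generating 1,000 legitimate probe attempts against a model would cost the customer roughly $99 under this reading of the rate.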
Choi notes that the amount of money developers can make through Bobidi — $10 to $20 per hour — is substantially above the minimum wage in many regions around the world. Assuming Choi's estimations are rooted in fact, Bobidi bucks the trend in the data science industry, which tends to pay data validators and labelers poorly. The annotators of the widely used ImageNet computer vision dataset made a median wage of $2 per hour, one study found, with only 4% making more than $7.25 per hour.
Pay structure aside, crowd-powered validation isn't a new idea. In 2017, the Computational Linguistics and Information Processing Laboratory at the University of Maryland launched a platform called Break It, Build It that let researchers submit models to users tasked with coming up with examples to defeat them. Elsewhere, Meta maintains a platform called Dynabench that has users "fool" models designed to analyze sentiment, answer questions, detect hate speech and more.
But Bae and Choi believe the "gamified" approach will help Bobidi stand out from the pack. While it's early days, the vendor claims to have customers in augmented reality and computer vision startups, including Seerslab, Deepixel and Gunsens.
The traction was enough to convince several investors to pledge money toward the venture. Today, Bobidi closed a $5.5 million seed round with participation from Y Combinator, We Ventures, Hyundai Motor Group, Scrum Ventures, New Product Experimentation (NPE) at Meta, Lotte Ventures, Atlas Pac Capital and several undisclosed angel investors.
Of note, Bobidi is among the first investments for NPE, which shifted gears last year from building consumer-facing apps to making seed-stage investments in AI-focused startups. When contacted for comment, head of NPE investments Sunita Parasuraman said via email: "We're thrilled to back the talented founders of Bobidi, who are helping companies better validate AI models with an innovative solution driven by people around the globe."
"Bobidi is a mashup between community and AI, a unique combination of expertise that we share," Choi added. "We believe that the era of big data is ending and we're about to enter the new era of quality data. It means we are moving from the era where the focus was to build the best model given the datasets to a new era where people are tasked to find the best dataset given the model – the complete opposite approach."
Choi said that the proceeds from the seed round will be put toward hiring — Bobidi currently has 12 employees — and building "customer insights experiences" and various "core machine learning technologies." The company hopes to triple the size of its team by the end of the year despite economic headwinds.
Housing Starts for June; earnings from Johnson & Johnson, Netflix
Stock futures nudged higher on Tuesday, as investors continued to focus on corporate earnings and weighed the risks of recession amid inflationary pressures.
The risk of recession continues to dominate investor attention. Facing the highest inflation in decades, the Federal Reserve is expected to move forward on a path of aggressive monetary policy to tame red-hot prices, including further interest-rate increases.
With little economic data between now and the Fed's next policy meeting, when the central bank is expected to decide on another supersize rate hike, markets are likely to continue focusing on corporate news and earnings.
"As the back-and-forth in sentiment yesterday showed, there are still plenty of obstacles for investors to navigate over the coming days," said Jim Reid, a strategist at Deutsche Bank.
"Not just recession risk but also the ongoing threat of a Russian gas shut-off at the end of the week."
Earnings are due ahead of the market open from Johnson & Johnson, Halliburton and Lockheed Martin. Netflix will report after the market close. Some technology companies have already slowed hiring or cut jobs.
Read: Tech Rises Modestly After Apple's Plans to Slow Hiring, IBM's Disappointing Outlook
Overseas, most major stock markets followed Wall Street's poor late-in-the-day performance from Monday.
In Europe, the euro rallied and the yield on the 2-year German bund rose in reaction to a Reuters report that the European Central Bank may kick off its rate hike campaign with a 50-basis point increase.
The report cited two unnamed sources with direct knowledge of the discussion. The ECB previously has guided for a rise of a quarter-point at its Thursday meeting.
The U.S. housing market has reached a turning point with rapidly rising mortgage rates and reduced affordability, but the sector isn't expected to experience a contraction like the one seen during the global financial crisis, Wells Fargo said.
New and existing home sales, new construction and home prices are all expected to pull back from their recent highs as the economy falls into a recession at the start of 2023, it said.
"Housing rarely escapes unscathed from a soft landing or a recession. However, unlike previous downturns, homeowners' finances are healthier and mortgage underwriting has been more cautious," Wells Fargo said.
The dollar continued to weaken, extending Monday's fall due to the paring back of expectations that the Fed will raise interest rates by 100 basis points at its next meeting.
"The U.S. rate market is currently pricing in around 80bps of hikes," said MUFG Bank.
However, MUFG continues to favor a stronger dollar in the near-term as the Fed's focus remains on dampening upside inflation risks.
MUFG said it's premature to expect the Fed to pivot toward a loose policy stance even if evidence of an economic slowdown builds through the rest of 2022.
Oil prices ticked higher in Europe, rebounding from earlier losses, with the focus back on supply tightness and limited spare capacity, after Joe Biden's visit to Saudi Arabia didn't result in a concrete pledge from the Saudis to increase output.
"Oil prices may have peaked, but they certainly don't look like they're going materially lower from here unless we get a huge surprise from OPEC+," said OANDA.
Base metals were lower in Europe, with worries over housing demand in the U.S. and China growing.
The U.S. National Association of Home Builders index fell much more than expected in July, while protests in China over mortgage payments on stalled construction projects have affected building activity.
"Rising cost pressures and interest rates will continue to dampen housing construction in our view," said Commonwealth Bank of Australia.
TODAY'S TOP HEADLINES
IBM Second-Quarter Earnings Advance on 9% Sales Growth
International Business Machines Corp. reported 9% sales growth in second-quarter results that also reflected some of the wider concerns tech investors confront as the sector kicks off its earnings season.
IBM on Monday said revenue for the April through June period reached $15.5 billion after Chief Executive Arvind Krishna vowed to reinject growth into the business. As part of that plan, IBM spun off some of its declining operations last year.
Twitter Says Elon Musk's Opposition to Expedited Trial Is a Tactical Delay
Twitter Inc. called Elon Musk's opposition to a speedy trial for its case against the billionaire a tactical delay and said his proposed timeline is "calculated to complicate and obfuscate."
The social-media company argued Monday in a legal filing in Delaware Chancery Court that the public dispute harms Twitter every day that Mr. Musk is in breach of their $44 billion agreement. Twitter reiterated that the court should set trial by mid-September, on an expedited schedule.
France to Pay $9.8Bln to Take EDF Private
The French government said Tuesday that it plans to pay about 9.7 billion euros ($9.84 billion) to take full control of Electricite de France SA, a step it says is needed to manage the transition away from fossil fuels at a time of energy crisis and the war in Ukraine.
Chinese Developer Modern Land Nets U.S. Court Approval for Debt Restructuring
A U.S. judge on Monday approved Chinese property developer Modern Land (China) Co.'s proposed reorganization plan in the Cayman Islands under U.S. law, including the cancellation of roughly $1.4 billion in dollar-denominated debts.
The decision pushes back against the thrust of a May court ruling from Hong Kong justices seeking to limit the international scope of rulings by U.S. bankruptcy courts.
Celsius Defends Decision to Halt Withdrawals at Debut Bankruptcy Hearing
Celsius Network LLC tried to ease customers' anger over its freeze on account withdrawals, but indicated it doesn't intend to quickly release their funds as the cryptocurrency lender aims to weather the downturn in digital currencies and craft a repayment plan.
Celsius lawyers used the company's debut appearance in bankruptcy court Monday to defend its decision to halt withdrawals last month, saying that was necessary to safeguard customers' financial interests as users fled and crypto assets sold off.
Yellen Calls for Trade Overhaul to Diversify From China
SEOUL-Treasury Secretary Janet Yellen called for a reorientation of the world's trading practices in the wake of Russia's invasion of Ukraine, pushing again for countries to become less reliant on China for critical components like semiconductors.
Speaking at an LG Group research facility in South Korea's largest city and capital, Ms. Yellen explored so-called "friend-shoring," a proposed paradigm shift that would have the U.S. and its allies trade more closely with one another and less with geopolitical rivals. Supply disruptions during the Covid-19 pandemic, as well as the war in Ukraine, have exposed the danger of depending too heavily on a single producer, Ms. Yellen said.
RBA Warns Unanchored Inflation Expectations Would Stoke Rates Pain
SYDNEY-The Reserve Bank of Australia is keeping a watchful eye on medium- and long-term inflation expectations, warning in minutes of its July 5 policy meeting that the cost to economic growth and employment in the long run will be higher if the psychology around inflation deteriorates.
The minutes, published Tuesday, also show that the RBA is currently happy that long-term inflation expectations remain largely anchored.
Russia Orders Troops to Target Ukraine's Western-Supplied Long-Range Weapons
KYIV, Ukraine-Russia ordered its forces to target the long-range missiles and artillery weapons that Western countries have recently supplied to Ukraine, a sign of how Kyiv's additional firepower has begun to reshape the conflict.
On Monday, Defense Minister Sergei Shoigu told a group of Russian troops to make Ukraine's long-range weaponry a priority target to prevent shelling in parts of eastern Ukraine held by Russian forces, according to the Russian Defense Ministry.
Ukraine's First Lady Olena Zelenska in Washington for High-Level Meetings
WASHINGTON-Ukraine's first lady Olena Zelenska opened a round of high-level meetings Monday in Washington that includes a planned address to Congress later in the week.
Ms. Zelenska met Monday with Secretary of State Antony Blinken and is due to hold talks with first lady Jill Biden on Tuesday, the White House said. Ms. Zelenska is scheduled to deliver remarks to lawmakers on Wednesday at 11 a.m., House Speaker Nancy Pelosi (D., Calif.) said.
Write to firstname.lastname@example.org

TODAY IN CANADA
Stocks to Watch:
Cathedral Energy Services Names Chad Robinson as New CFO, Replacing Ian Graham; Outgoing CFO to Leave Company; Robinson Joined Co via Lexa Drilling Technologies Acquisition; Cathedral Energy Also Appointing Vaughn Spengler as VP of Canadian Ops
Cathedral Energy Also Announcing Purchase of Final 9.02% Stake in Lexa From Cathedral Director; Lexa Shares Being Purchased on Same Terms as Other 90.98% Bought Previously; Cathedral Energy Will Issue 159,836 Common Shrs as Consideration; Cathedral Energy Announced Acquisition of Downhole Tech Co Lexa in June
Expected Major Events for Tuesday
06:00/UK: Jun UK monthly unemployment figures
12:30/US: Jun New Residential Construction - Housing Starts and Building Permits
12:55/US: 07/16 Johnson Redbook Retail Sales Index
20:30/US: 07/15 API Weekly Statistical Bulletin
23:01/UK: Jun Scottish Retail Sales Monitor
All times in GMT. Powered by Onclusive and Dow Jones.
Expected Earnings for Tuesday
Aehr Test Systems (AEHR) is expected to report for 4Q.
Ally Financial Inc (ALLY) is expected to report $1.85 for 2Q.
(MORE TO FOLLOW) Dow Jones Newswires
July 19, 2022 05:04 ET (09:04 GMT)Copyright (c) 2022 Dow Jones & Company, Inc.