When a computer can figure out whether a movie trailer is going to positively affect an audience, it makes you wonder how close we are to computer-generated predictions on everything else in life. The short answer, according to Michael Karasick, IBM's VP and Research Director at Almaden Labs, is that IBM's Watson is already making them. Since conquering "Jeopardy!", Watson has been focused on predictive healthcare, customer service, investment advice and culinary pursuits. And IBM is not stopping there: the company is allowing select customers to use "Watson as a service" and may soon open it up to developers to build Watson apps.
Yes, the Watson technology is still maturing, but I am convinced that within five years the Watson platform will learn faster and make better predictions with each new field it understands. That's because, as Karasick told me, "If you train a system like Watson on domain A and domain B, then it knows how to make the equivalence between terminologies in different domains." That means as Watson solves problems in chemistry, it can generate probable solutions in physics and metallurgy too.
Imagine how this might be applied to marketing. By using Watson as a service, a business could train Watson to understand its customers, then use predictive models to recognize new products or services that their customers will buy.
Predict new trends and shifting tastes
Watson is a voracious consumer of data, and it doesn’t forget anything. You can feed it data from credit cards, sales databases, social networks, location data, web pages and it can compile and categorize that information to make high probability predictions.
And most strikingly, Watson is well ahead of its competitors in sentiment analysis. According to Karasick, Watson can recognize irony and sarcasm – and properly apprehend the intended meaning. That means Watson can quickly analyze large sample sizes to determine whether a movie trailer, product offering or clothing line is going to work with consumers.
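Watson's models are proprietary, but the core idea of sentiment analysis, scoring text for polarity, can be shown with a deliberately naive sketch. The word lists here are invented for illustration; real systems use trained models that can, as Karasick notes, handle irony and sarcasm, which a lexicon approach cannot:

```python
# Toy lexicon-based sentiment scorer -- a deliberately simple
# illustration of the idea; production systems like Watson use
# trained models, not hand-written word lists.
POSITIVE = {"love", "great", "amazing", "excited"}
NEGATIVE = {"hate", "boring", "awful", "disappointed"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]; positive means favorable sentiment."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("I love this trailer, it looks amazing!"))  # 1.0
print(sentiment_score("What a boring, awful trailer."))           # -1.0
```

Aggregating such scores over thousands of social posts about a trailer or product line gives the kind of large-sample read described above, minus the nuance a system like Watson adds.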
Analyze social conversations – generate leads
Most social listening solutions on the market today do an adequate job of giving the marketer signals and reports about their industry, competitors, partners and current customers. But it’s up to the marketer to analyze the information and take action.
As Watson has demonstrated in other domains, it is foreseeable that it could identify which information is most important and recommend how to act on it. For example, if it finds a cluster of people discussing problems that the marketer's solution solves, Watson could automatically notify the sales team or take action on its own to educate the prospective customers.
Determine whether a new innovation will sell or not
Because Watson can learn from one domain of knowledge and make high probability predictions in another, it’s reasonable to assume that if a company wanted to understand whether a new innovation will sell or not, Watson could analyze a company’s current market and customer base to provide success probabilities.
We’re a long way off from a Watson with the taste of a Steve Jobs, but if it has enough understanding of the situation, it can produce insights that can provide companies a clearer picture of the opportunities and threats.
Computer calculated and automated growth hacking
If you’re a marketer and not familiar with growth hacking, please study up fast. Growth hackers focus on innovative A/B testing techniques to maximize conversions on emails, websites, social media, online content or just about any digital media available to them. It’s a low cost but more effective alternative to traditional media.
I can see how Watson could proactively and intelligently test, measure and optimize digital content, ads, website pages, even a company's product, to efficiently maximize customer growth. Andy Johns of Greylock, formerly a growth hacker for Facebook, Twitter and Quora, told me that Facebook conducted 6 hacks a day to maximize growth opportunities. I suspect Watson could easily handle 10 times that amount.
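A minimal version of the underlying growth-hacking mechanic, deciding whether variant B of a page beats variant A, is a two-proportion z-test over conversion counts. The sketch below uses made-up traffic numbers; it illustrates the statistics involved, not anything Watson- or Facebook-specific:

```python
# Minimal A/B test evaluation: compare conversion rates of two page
# variants with a two-proportion z-test (normal approximation).
# All visitor/conversion counts below are made-up illustration data.
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    """Return (rate_a, rate_b, two-sided p-value) for H0: equal rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 1 - erf(abs(z) / sqrt(2))               # two-sided tail prob.
    return p_a, p_b, p_value

# Variant A: 120 conversions from 2,400 visitors; B: 156 from 2,400
rate_a, rate_b, p = z_test(120, 2400, 156, 2400)
print(f"A={rate_a:.1%}  B={rate_b:.1%}  p={p:.3f}")
```

At p below the usual 0.05 threshold, a growth team (or an automated system) would ship variant B and move on to the next test; running dozens of these per day is exactly the cadence described above.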
This clearly is the digital march of progress. Watson has the potential to eliminate ineffective marketing, boost good marketing to great marketing, and predict how to better spend marketing dollars in the future.
Put it all together and you’ve revolutionized marketing.
The average cost of a data breach is $3.86 million globally – a 6.4 percent increase from 2017, a newly released report revealed.
The 2018 Cost of a Data Breach study, conducted by Ponemon Institute and sponsored by IBM Security, found that the cost of data breaches to a business' bottom line has been steadily increasing over the past five years. The report is based on in-depth interviews with nearly 500 companies that experienced a data breach in the past year, and on analysis of breaches involving roughly 2,500 to 100,000 stolen records, IBM explained.
According to the report, the cost of a data breach is calculated using activity-based costing, a methodology that "identifies activities and assigns a cost based on real use." Factors that impact the cost of a data breach include unexpected loss of customers, the size of the breach, the time it takes to detect and respond, and how the breach is managed.
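Activity-based costing is simple to illustrate: assign a cost to each breach-response activity and sum them. The categories below loosely mirror the report's framing, but every dollar figure is invented for illustration:

```python
# Activity-based costing in miniature: total breach cost is the sum
# of costs assigned to each response activity. Category names loosely
# mirror the report's approach; the dollar figures are illustrative.
breach_activities = {
    "detection_and_escalation": 1_000_000,
    "notification":               160_000,
    "post_breach_response":     1_000_000,
    "lost_business":            1_700_000,
}

total = sum(breach_activities.values())
per_record = total / 25_000  # assume 25,000 compromised records
print(f"Total: ${total:,}  Per record: ${per_record:,.2f}")
```

Dividing the summed total by the number of compromised records yields the per-record figures the report cites, which is why larger breaches can show lower per-record costs even as total costs climb.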
“The goal of our research is to demonstrate the value of good data protection practices, and the factors that make a tangible difference in what a company pays to resolve a data breach,” said Larry Ponemon, chairman and founder of Ponemon Institute. “While data breach costs have been rising steadily over the history of the study, we see positive signs of cost savings through the use of newer technologies as well as proper planning for incident response, which can significantly reduce these costs.”
The 2018 report also looks at mega breaches for the first time. A mega breach, according to IBM, ranges from 1 million to 50 million records lost, costing companies between $40 million and $350 million in losses. The report found that the number of mega breaches has almost doubled in the last five years.
Ponemon Institute analyzed 11 companies that experienced mega breaches within the past two years and found the average cost of a mega breach of 1 million lost records is $40 million, while a breach of 50 million compromised records costs $350 million. Ten of the 11 companies reported the breaches came from malicious and criminal attacks rather than glitches or human errors. Additionally, the report found the average time to detect and contain a mega breach is 365 days. "For mega breaches, the biggest expense category was costs associated with lost business, which was estimated at nearly $118 million for breaches of 50 million records – almost a third of the total cost of a breach this size," IBM wrote in an announcement.
For data breaches of 100,000 compromised records or less, the report found data breach costs are heavily associated with the time it takes to contain a data breach, as well as the investment in technologies that speed up response time, IBM explained. The report found the average time to identify a breach is 197 days, and 69 days to contain the breach.
Other findings included: It costs $148 on average per lost or stolen record, and the likelihood of an organization suffering another breach in the next two years is about 28 percent.
“While highly publicized data breaches often report losses in the millions, these numbers are highly variable and often focused on a few specific costs which are easily quantified,” said Wendi Whitmore, global lead for IBM X-Force Incident Response and Intelligence Services (IRIS). “The truth is there are many hidden expenses which must be taken into account, such as reputational damage, customer turnover, and operational costs. Knowing where the costs lie, and how to reduce them, can help companies invest their resources more strategically and lower the huge financial risks at stake.”
IBM has announced the acquisition of data observability software vendor Databand.ai. Today’s announcement marks IBM’s fifth acquisition of 2022. The company says the acquisition “further strengthens IBM’s software portfolio across data, AI, and automation to address the full spectrum of observability and helps businesses ensure that trustworthy data is being put into the right hands of the right users at the right time.”
Data observability is an expanding sector in the big data market, spurred by explosive growth in the amount of data organizations are producing and managing. Data quality issues can arise with large volumes, and Gartner estimates that poor data quality costs businesses $12.9 million a year on average.
“Data observability takes traditional data operations to the next level by using historical trends to compute statistics about data workloads and data pipelines directly at the source, determining if they are working, and pinpointing where any problems may exist,” said IBM in a press release. “When combined with a full stack observability strategy, it can help IT teams quickly surface and resolve issues from infrastructure and applications to data and machine learning systems.”
IBM says this acquisition will extend Databand.ai’s resources for expanding its observability capabilities for broader integration across more open source and commercial solutions, and enterprises will have flexibility in how they run Databand.ai, either with a subscription or as-a-Service.
IBM has made over 25 strategic acquisitions since Arvind Krishna took the helm as CEO in April 2020. The company mentions that Databand.ai will be used with IBM Observability by Instana APM, another observability acquisition, and IBM Watson Studio, its data science platform, to address the full spectrum of observability across IT operations. To provide a more complete view of a data platform, Databand.ai can alert data teams and engineers when data they are working with is incomplete or missing, while Instana can explain which application the missing data originates from and why the application service is failing.
“Our clients are data-driven enterprises who rely on high-quality, trustworthy data to power their mission-critical processes. When they don’t have access to the data they need in any given moment, their business can grind to a halt,” said Daniel Hernandez, General Manager for Data and AI, IBM. “With the addition of Databand.ai, IBM offers the most comprehensive set of observability capabilities for IT across applications, data and machine learning, and is continuing to provide our clients and partners with the technology they need to deliver trustworthy data and AI at scale.”
Databand.ai is headquartered in Tel Aviv, and its employees will join IBM’s Data and AI division to grow its portfolio of data and AI products, including Watson and IBM Cloud Pak for Data.
“You can’t protect what you can’t see, and when the data platform is ineffective, everyone is impacted –including customers,” said Josh Benamram, co-founder and CEO of Databand.ai. “That’s why global brands such as FanDuel, Agoda and Trax Retail already rely on Databand.ai to remove bad data surprises by detecting and resolving them before they create costly business impacts. Joining IBM will help us scale our software and significantly accelerate our ability to meet the evolving needs of enterprise clients.”
The O’Reilly Open Source Software Conference (OSCON) is taking place this week in Oregon, gathering together industry leaders to talk about open source, cloud native, data-driven solutions, AI capabilities and product management.
"OSCON has continued to be the catalyst for open source innovation for twenty years, providing organizations with the latest technological advances and guidance to successfully implement the technology in a way that makes sense for them," said Rachel Roumeliotis, vice president of content strategy at O'Reilly and OSCON program chair. "To keep OSCON at the forefront of open source innovation for the next twenty years, we've shifted the program to focus more on software development with topics such as cloud-native technologies. While not all are open source, they allow software developers to thrive and stay ahead of these shifts."
A number of companies are also taking OSCON as an opportunity to release new software and solutions. Announcements included:
IBM’s Data Asset eXchange (DAX)
DAX is an online hub designed to provide developers and data scientists a place to discover free and open datasets under open data licenses. The datasets will use the Linux Foundation’s Community Data License Agreement when possible, and integrate with IBM Cloud and AI services. IBM will also provide new datasets to the online hub regularly.
“For developers, DAX provides a trusted source for carefully curated open datasets for AI. These datasets are ready for use in enterprise AI applications, with related content such as tutorials to make getting started easier,” the company wrote in a post.
DAX joins IBM’s other initiatives to help data scientists and developers discover and access data. IBM Model Asset eXchange (MAX) is geared towards machine learning and deep learning models. The company’s Center for Open-Source Data and AI Technologies will work to make it easier to use DAX and MAX assets.
New open-source projects
IBM also announced a new open-source project designed for Kubernetes. Kabanero is meant to help developers build cloud-native apps. It features governance and compliance capabilities and the ability to architect, build, deploy and manage the lifecycle of a Kubernetes-based app, IBM explained.
"Kabanero takes the guesswork out of Kubernetes and DevOps. With Kabanero, you don't need to spend time mastering DevOps practices and Kubernetes infrastructure topics like networking, ingress and security. Instead, Kabanero integrates the runtimes and frameworks that you already know and use (Node.js, Java, Swift) with a Kubernetes-native DevOps toolchain. Our pre-built deployments to Kubernetes and Knative (using Operators and Helm charts) are built on best practices. So, developers can spend more time developing scalable applications and less time understanding infrastructure," Nate Ziemann, product manager at IBM, wrote in a post.
The company also announced Appsody, an open source project to help with cloud-native apps in containers; Codewind, an IDE integration for cloud-native development; and Razee, a project for multi-cluster continuous delivery tooling for Kubernetes.
“As companies modernize their infrastructure and adopt a hybrid cloud strategy, they’re increasingly turning to Kubernetes and containers. Choosing the right technology for building cloud-native apps and gaining the knowledge you need to effectively adopt Kubernetes is difficult. On top of that, enabling architects, developers, and operations to work together easily, while having their individual requirements met, is an additional challenge when moving to cloud,” Ziemann wrote.
WSO2 API Microgateway 3.0 announced
WSO2 is introducing a new version of its WSO2 API Microgateway focused on creating, deploying and securing APIs within distributed microservices architectures. The latest release features developer-first runtime generation, runtime service discovery, support for composing multiple microservices, support for transforming legacy API formats and separation of the WSO2 API Microgateway toolkit.
“API microgateways are a key part of building resilient, manageable microservices architectures,” said Paul Fremantle, WSO2 CTO and co-founder. “WSO2 API Microgateway 3.0 fits effectively into continuous development practices and has the proven scalability and robustness for mission-critical applications.”
Carbon Relay’s new AIOps platform
Red Sky Ops is a new open-source AIOps platform to help organizations with Kubernetes initiatives as well as deploy, scale and manage containerized apps. According to Carbon Relay, this will help DevOps teams manage hundreds of app variables and configurations. The solution uses machine learning to study, replicate and stress-test app environments as well as configure, schedule and allocate resources.
Carbon Relay has also announced it will be joining the Cloud Native Computing Foundation to better support the Kubernetes community and the use of cloud native technologies.
60% of breached businesses raised product prices post-breach; vast majority of critical infrastructure lagging in zero trust adoption; $550,000 in extra costs for insufficiently staffed businesses
CAMBRIDGE, Mass., July 27, 2022 /PRNewswire/ -- IBM (NYSE: IBM) Security today released the annual Cost of a Data Breach Report[1], revealing costlier and higher-impact data breaches than ever before, with the global average cost of a data breach reaching an all-time high of $4.35 million for studied organizations. With breach costs increasing nearly 13% over the last two years of the report, the findings suggest these incidents may also be contributing to rising costs of goods and services. In fact, 60% of studied organizations raised their product or services prices due to the breach, at a time when the cost of goods is already soaring worldwide amid inflation and supply chain issues.
The persistence of cyberattacks is also shedding light on the "haunting effect" data breaches are having on businesses, with the IBM report finding 83% of studied organizations have experienced more than one data breach in their lifetime. Another factor rising over time is the after-effects of breaches on these organizations, which linger long after they occur, as nearly 50% of breach costs are incurred more than a year after the breach.
The 2022 Cost of a Data Breach Report is based on in-depth analysis of real-world data breaches experienced by 550 organizations globally between March 2021 and March 2022. The research, which was sponsored and analyzed by IBM Security, was conducted by the Ponemon Institute.
Some of the key findings in the 2022 IBM report include:
Critical Infrastructure Lags in Zero Trust – Almost 80% of critical infrastructure organizations studied don't adopt zero trust strategies, and they saw average breach costs rise to $5.4 million – a $1.17 million increase compared to those that do. Meanwhile, 28% of breaches amongst these organizations were ransomware or destructive attacks.
It Doesn't Pay to Pay – Ransomware victims in the study that opted to pay threat actors' ransom demands saw only $610,000 less in average breach costs compared to those that chose not to pay – not including the cost of the ransom. Factoring in the high cost of ransom payments, the financial toll may rise even higher, suggesting that simply paying the ransom may not be an effective strategy.
Security Immaturity in Clouds – Forty-three percent of studied organizations are in the early stages or have not started applying security practices across their cloud environments, incurring over $660,000 more on average in breach costs than studied organizations with mature security across their cloud environments.
Security AI and Automation Leads as Multi-Million Dollar Cost Saver – Participating organizations fully deploying security AI and automation incurred $3.05 million less on average in breach costs compared to studied organizations that have not deployed the technology – the biggest cost saver observed in the study.
"Businesses need to put their security defenses on the offense and beat attackers to the punch. It's time to stop the adversary from achieving their objectives and start to minimize the impact of attacks. The more businesses try to perfect their perimeter instead of investing in detection and response, the more breaches can fuel cost of living increases." said Charles Henderson, Global Head of IBM Security X-Force. "This report shows that the right strategies coupled with the right technologies can help make all the difference when businesses are attacked."
Over-trusting Critical Infrastructure Organizations
Concerns over critical infrastructure targeting appear to be increasing globally over the past year, with many governments' cybersecurity agencies urging vigilance against disruptive attacks. In fact, IBM's report reveals that ransomware and destructive attacks represented 28% of breaches amongst critical infrastructure organizations studied, highlighting how threat actors are seeking to fracture the global supply chains that rely on these organizations. This includes financial services, industrial, transportation and healthcare companies amongst others.
Despite the call for caution, and a year after the Biden Administration issued a cybersecurity executive order that centers around the importance of adopting a zero trust approach to strengthen the nation's cybersecurity, only 21% of critical infrastructure organizations studied adopt a zero trust security model, according to the report. Add to that, 17% of breaches at critical infrastructure organizations were caused due to a business partner being initially compromised, highlighting the security risks that over-trusting environments pose.
Businesses that Pay the Ransom Aren't Getting a "Bargain"
According to the 2022 IBM report, businesses that paid threat actors' ransom demands saw $610,000 less in average breach costs compared to those that chose not to pay – not including the ransom amount paid. However, when accounting for the average ransom payment, which according to Sophos reached $812,000 in 2021, businesses that opt to pay the ransom could net higher total costs, all while inadvertently funding future ransomware attacks with capital that could otherwise go to remediation and recovery efforts, and potentially exposing themselves to federal offenses.
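The report's arithmetic here is easy to check, using its own average savings figure and the Sophos 2021 average ransom payment:

```python
# Back-of-the-envelope from the figures quoted above:
# paying the ransom saved $610,000 in average breach costs,
# but the average 2021 ransom payment (per Sophos) was $812,000.
avg_breach_cost_savings = 610_000
avg_ransom_payment = 812_000  # Sophos, 2021

net_effect = avg_breach_cost_savings - avg_ransom_payment
print(f"Net effect of paying: ${net_effect:,}")  # negative: paying costs more
```

On these averages, paying leaves a business roughly $202,000 worse off before even counting the downstream risks the report mentions.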
The persistence of ransomware, despite significant global efforts to impede it, is fueled by the industrialization of cybercrime. IBM Security X-Force found that the duration of studied enterprise ransomware attacks dropped 94% over the past three years – from over two months to just under four days. These dramatically shorter attack lifecycles can prompt higher-impact attacks, as cybersecurity incident responders are left with very short windows of opportunity to detect and contain attacks. With "time to ransom" dropping to a matter of hours, it's essential that businesses prioritize rigorous testing of incident response (IR) playbooks ahead of time. Yet the report states that as many as 37% of organizations studied that have incident response plans don't test them regularly.
Hybrid Cloud Advantage
The report also showcased hybrid cloud environments as the most prevalent (45%) infrastructure amongst organizations studied. Averaging $3.8 million in breach costs, businesses that adopted a hybrid cloud model observed lower breach costs compared to businesses with a solely public or private cloud model, which experienced $5.02 million and $4.24 million on average respectively. In fact, hybrid cloud adopters studied were able to identify and contain data breaches 15 days faster on average than the global average of 277 days for participants.
The report highlights that 45% of studied breaches occurred in the cloud, emphasizing the importance of cloud security. However, a significant 43% of reporting organizations stated they are just in the early stages or have not started implementing security practices to protect their cloud environments, observing higher breach costs[2]. Businesses studied that did not implement security practices across their cloud environments required an average of 108 more days to identify and contain a data breach than those consistently applying security practices across all their domains.
Additional findings in the 2022 IBM report include:
Phishing Becomes Costliest Breach Cause – While compromised credentials continued to reign as the most common cause of a breach (19%), phishing was the second (16%) and the costliest cause, leading to $4.91 million in average breach costs for responding organizations.
Healthcare Breach Costs Hit Double Digits for First Time Ever – For the 12th year in a row, healthcare participants saw the costliest breaches amongst industries, with average breach costs in healthcare increasing by nearly $1 million to reach a record high of $10.1 million.
Insufficient Security Staffing – Sixty-two percent of studied organizations stated they are not sufficiently staffed to meet their security needs, averaging $550,000 more in breach costs than those that state they are sufficiently staffed.
To obtain a copy of the 2022 Cost of a Data Breach Report, please visit: https://www.ibm.com/security/data-breach.
Read more about the report's top findings in this IBM Security Intelligence blog.
Sign up for the 2022 IBM Security Cost of a Data Breach webinar on Wednesday, August 3, 2022, at 11:00 a.m. ET here.
Connect with the IBM Security X-Force team for a personalized review of the findings: https://ibm.biz/book-a-consult.
About IBM Security
IBM Security offers one of the most advanced and integrated portfolios of enterprise security products and services. The portfolio, supported by world-renowned IBM Security X-Force® research, enables organizations to effectively manage risk and defend against emerging threats. IBM operates one of the world's broadest security research, development, and delivery organizations, monitors 150 billion+ security events per day in more than 130 countries, and has been granted more than 10,000 security patents worldwide. For more information, please check www.ibm.com/security, follow @IBMSecurity on Twitter or visit the IBM Security Intelligence blog.
IBM Security Communications
[1] Cost of a Data Breach Report 2022, conducted by Ponemon Institute, sponsored and analyzed by IBM
[2] Average cost of $4.53 million, compared to an average cost of $3.87 million at participating organizations with mature-stage cloud security practices
Blockchain technology has great potential in various application areas, such as banking, cybersecurity, and IoT. Furthermore, with the widespread use of mobile devices and growing internet penetration across the globe, individuals are progressively inclined towards blockchain. These changing trends and distributed IT environments have made organizations more susceptible to privacy concerns, fueling demand for blockchain solutions. As a result, the need in the USA for custom, cutting-edge, and futuristic blockchain applications keeps increasing.
With significant market growth, the number of service providers has also increased. Are you looking for the best blockchain development services in the United States? Not all blockchain development companies in the United States can produce highly bespoke blockchain apps. As a result, we conducted extensive research on thousands of companies from New York, California, Texas, Illinois, Massachusetts, and many other states to compile a list of the top blockchain developers in the United States for 2022 and the coming years.
List of Top 10 Blockchain Development Companies in the USA (2022)
1. Hyperlink InfoSystem
Hyperlink InfoSystem is a top web & mobile app development company. The company renders the best blockchain-based solutions to different industries. They develop blockchain products to help solve real-world problems that enterprises face. The company's blockchain solutions are trustworthy, secure & transparent, and they have custom development modules that can be customized depending on the client's requirements. The company delivers the best web & app development, AI solutions, AR/VR, Salesforce development, Big Data Analytics, IoT development, Blockchain, CRM Solutions, and much more.
2. Fueled
Fueled specializes in designing and developing award-winning mobile apps and websites that are fast, attractive, responsive, and easy to use. But they don't work on just any apps or for any client. Instead, they take on the most interesting projects with the best clients, whether for startups or big brands, because an unwavering passion for quality unites them.
3. IBM iX
IBM iX is a technology firm that uses IBM Design Thinking to bring unique and forward-thinking ideas to the table — not only for design but also for addressing business challenges. They develop proficiency in today's evolving technologies in a continually changing landscape. Their distinction isn't just their technical understanding; it's how they use technology to solve business challenges.
4. Software Pro
Located in the heart of Las Vegas, Nevada, with two international offices, Software Pro is among the top software development companies in the United States, helping global clients. Software Pro fields a strong team of software developers, custom application development experts, UX/UI specialists, and certified web developers.
5. SimTLiX
SimTLiX is a software development company specializing in applying emerging technologies to meet the demands of businesses. The firm arose from a collaboration between experts from some of the world's most prominent multinational corporations and committed university scholars who have spent years studying and defining future technologies and development methods.
6. ConsenSys
ConsenSys is a blockchain venture production studio. Their global team builds an ecosystem of consumer-centric products and enterprise solutions using blockchain technologies, primarily Ethereum. They are a global formation of technologists and entrepreneurs building the infrastructure, applications, and practices that enable a decentralized world.
7. Synechron
Synechron has been on a steadily upward track since 2001. It has offices in the United States, Canada, the United Kingdom, the Netherlands, Ireland, Germany, Switzerland, Luxembourg, Italy, France, Serbia, Australia, Hong Kong, Singapore, Japan, the Philippines, and the United Arab Emirates, as well as development centres in Pune, Bangalore, and Hyderabad, India.
8. Concurrency Inc
Clients choose Concurrency Inc because of its collaborative approach, top talent, project-scoped work plans, and commercial value. Concurrency focuses on helping customers get more value from their IT investments by developing creative solutions that address people, process, and technology demands across all technologies and business needs.
9. TheBlockBox
Finance, banking, supply chain, healthcare, transportation, digital identification, and advertising are among the areas for which TheBlockBox develops end-to-end blockchain solutions. They will work together with you to determine whether blockchain is the ideal technology for your concept or project and how to get the most out of it.
10. HData Systems
HData Systems is an India-based Data Science firm that helps businesses increase their productivity and performance with analytical approaches. The company provides app development, big data analytics, data science, AI, machine learning, automation, etc.
Data breaches are costing Canadian companies millions of dollars each, according to IBM’s 2022 Cost of a Data Breach report.
The study examined 25 data breaches in Canada over eight years and found companies paid an average of $7.05 million per incident this year. The figure increased from the $6.75 million reported in 2021.
IBM says breaches contribute to higher costs for goods and services through a hidden "cyber tax" that companies pass on to customers. For example, the rise in cost of a particular item could be linked to several cyber incidents across the item's supply chain, from the manufacturer to logistics and transportation companies.
Companies in the financial sector are paying the highest cost for breaches at roughly $520 per record. The technology industry is second, paying $433 a record. The services industry rounds out the top three, paying $362 a record.
The report found that stolen or compromised user credentials were the most common method attackers used to target organizations.
Furthermore, companies that end up paying cyber criminals put themselves in a vulnerable position, as they are more likely to be targeted again in the future.
The study also found companies that consistently utilized security measures paid less per breach, at $4.31 million, compared to $8.09 million by companies that didn’t.
“Businesses need to put their security defences on the offence and beat attackers to the punch,” Charles Henderson, global head of IBM Security X-Force, said.
“The more businesses try to perfect their perimeter instead of investing in detection and response, the more breaches can fuel cost of living increases.”
IBM on Wednesday announced its purchase of Databand.ai, an Israel-based developer of a proactive data observability platform designed to catch bad data before it impacts a customer's business.
Financial details of the acquisition, which closed on 27 June, were not disclosed by IBM.
"Data observability," as Databand.ai defines it, is the blanket term for understanding the health and state of data in a system, allowing a business to identify, troubleshoot, and resolve data issues in near real-time.
Observability helps not only describe a problem for engineers, but also provides the context to resolve the problem and look at ways to prevent the error from happening again, according to Databand.ai.
“The way to achieve this is to pull best practices from DevOps and apply them to Data Operations. All of that to say, data observability is the natural evolution of the data quality movement, and it’s making DataOps as a practice possible,” the company said.
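To make the idea concrete, here is a minimal, hypothetical sketch of the kind of check a data observability layer runs; the names, thresholds, and statistics are illustrative and not Databand.ai's actual API:

```python
# Illustrative data observability check: compare a fresh batch of pipeline
# data against a known-good baseline and flag anomalies (row-count drops,
# null-rate spikes) before downstream consumers see the bad data.
from dataclasses import dataclass

@dataclass
class BatchStats:
    row_count: int
    null_fraction: float   # fraction of null values in a key column

def check_batch(current: BatchStats, baseline: BatchStats,
                max_row_drop: float = 0.5, max_null_rise: float = 0.1):
    """Return a list of human-readable alerts; an empty list means healthy."""
    alerts = []
    # Alert if the batch shrank to less than half the baseline row count.
    if current.row_count < baseline.row_count * max_row_drop:
        alerts.append(f"row count dropped: {current.row_count} vs "
                      f"baseline {baseline.row_count}")
    # Alert if the null rate rose more than 10 points above baseline.
    if current.null_fraction > baseline.null_fraction + max_null_rise:
        alerts.append(f"null rate spiked: {current.null_fraction:.2%}")
    return alerts

baseline = BatchStats(row_count=10_000, null_fraction=0.01)
todays_batch = BatchStats(row_count=3_000, null_fraction=0.20)
print(check_batch(todays_batch, baseline))
```

In practice such checks run automatically on every pipeline run, which is the DevOps-style monitoring habit the quote above describes being applied to data operations.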
IBM expects Databand.ai to strengthen its data, AI, and automation software portfolio to ensure trustworthy data goes to the right user at the right time. The company also expects the acquisition to help Databand.ai take advantage of IBM’s own R&D investments and other IBM acquisitions.
IBM, citing Gartner, said that poor data quality costs organizations an average of $12.9 million every year while increasing the complexity of data ecosystems and leading to poor decision making.
That makes this an exciting acquisition, said Mike Gilfix, vice president of product management for data and AI at IBM.
“Bad data is expensive,” Gilfix told CRN US. “We’re excited about the fast-growing data observability market. We know when data stops, companies lose business. If you depend on data to run your company, and that data is corrupt or has other issues, we want Databand.ai to help find the issues and resolve them faster.”
Databand.ai is part of a three-legged way to bring observability to businesses, Gilfix said.
Databand.ai is focused on data observability, and is important to ensuring that data pipelines work as promised, he said.
The second leg is IBM Observability by Instana APM, which IBM acquired in late 2020. Instana brings observability specifically to applications by observing the makeup of the application and the performance of the app itself, he said.
The third is IBM Watson Studio, which brings observability to AI models, he said.
For IBM, Databand.ai is also an important component of a data fabric, which Gilfix defined as an architectural approach that lets consumers of data, including engineers, access data, discover it, catalogue it, build data pipelines, and protect data across multiple data silos.
“Many companies struggle with data silos,” he said. “A data fabric is a good way to connect those silos together.”
IBM’s indirect channel partners are an important part of the company’s observability business, and Databand.ai will be no different once it is integrated, Gilfix said.
“The channel is a big part of our business,” he said. “We believe a rich ecosystem is critical. Partners add expertise, make sure customers are successful, and bring in their own value adds to be even more successful.”
This article originally appeared at crn.com
SAN FRANCISCO, July 28, 2022 — Fiddler today announced major improvements to its MPM platform, including model ingestion at giga-scale, natural language processing (NLP) and computer vision (CV) monitoring, class imbalance, and an intuitive and streamlined user experience. With these new features, the Fiddler MPM platform is delivering a deeper understanding of unstructured model behavior and performance, and enhanced scalability, discoverability of rare and nuanced model drifts, and ease of use.
According to IBM’s Global AI Adoption Index 2022, a majority of organizations have not taken key steps to ensure their AI is trustworthy and responsible. Such steps include reducing bias (74%), tracking performance variations and model drift (68%), and making sure they can explain AI-powered decisions (61%). By operationalizing machine learning in a safe and trustworthy way and going beyond metrics to explain results, the Fiddler MPM platform makes more transparent AI models a reality.
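One common way to track the model drift mentioned above is the population stability index (PSI), which compares a feature's binned distribution in production against its training baseline. The sketch below is illustrative; the thresholds are conventional rules of thumb, not Fiddler's implementation:

```python
# Population stability index (PSI), a standard drift metric.
# Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift.
import math

def psi(expected, actual):
    """PSI over two binned distributions (lists of fractions summing to 1)."""
    eps = 1e-6  # guard against log(0) for empty bins
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

train_bins = [0.25, 0.25, 0.25, 0.25]   # feature distribution at training time
prod_bins  = [0.10, 0.20, 0.30, 0.40]   # distribution observed in production
print(round(psi(train_bins, prod_bins), 3))  # → 0.228, i.e. moderate drift
```

A monitoring platform computes a score like this per feature on a schedule and alerts when it crosses a threshold, which is what "tracking performance variations and model drift" amounts to operationally.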
The combined power of the new Fiddler capabilities will help enterprises across finance, banking, insurance, healthcare, defense, criminal justice, retail, and travel advance their AI efforts and deliver responsible models. Specifically, Fiddler has added capabilities designed to empower global organizations and Fortune 500 companies.
Fiddler has also created a single pane of glass UI, providing data science and MLOps teams with command center-like visibility into every model’s behavior and performance across training and production. Teams can view, prioritize, and manage updates, alerts, versions, traffic, and drifts from a centralized homepage. Additionally, customers can now choose where to deploy their models to fit their company’s needs – whether that be on-premises, cloud, or both.
“AI model development is pivotal to our company’s ability to hire, vet, and match the world’s top software development and engineering talent with quality job opportunities,” said Jonathan Siddharth, CEO and Founder, Turing, a Fiddler customer. “We have invested in Fiddler to monitor models and ensure that our AI is consistently fair and explainable. Their platform and new enhancements demonstrate the path forward to making responsible AI a reality for every organization.”
“In order for organizations to be successful with AI, it is critical they ensure that underlying machine learning models are robust to shifts in the data, are not relying on spurious features, and are not unduly discriminating against minority groups,” said Krishna Gade, Founder and CEO, Fiddler. “Our new capabilities provide enterprises with even greater visibility throughout the entire model lifecycle. The ability to understand and explain unstructured data and discover rare but costly model drifts is game changing, and opens up tremendous AI opportunities across a plethora of use cases and a diverse set of industries. With Fiddler, organizations can deliver enterprise-scale AI models that power greater business results and responsible and fair outcomes for consumers.”
To learn more about Fiddler, visit: www.fiddler.ai.
Fiddler is a pioneer in Model Performance Management for responsible AI. The Fiddler platform's unified environment provides a common language, centralized controls, and actionable insights to operationalize ML/AI with trust. Model monitoring, explainable AI, analytics, and fairness capabilities address the unique challenges of building stable and secure in-house MLOps systems at scale.
Unlike observability solutions, Fiddler integrates deep XAI and analytics to help you grow into advanced capabilities over time and build a framework for responsible AI practices. Fortune 500 organizations use Fiddler across training and production models to accelerate AI time-to-value and scale, build trusted AI solutions, and increase revenue.