Money-back guarantee of 00M-624 brain dumps at killexams.com

With the assistance of the thoroughly tested killexams.com IBM Risk Analytics for Governance Risk and Compliance Sales Mastery Test v1 brain dumps and questions and answers, you can work out how to build your own 00M-624 knowledge. Our 00M-624 PDF downloads are kept up to date and to the point. The IBM 00M-624 dumps broaden your vision and help you greatly in preparing for the 00M-624 exam.

Exam Code: 00M-624 Practice test 2022 by Killexams.com team
IBM Risk Analytics for Governance Risk and Compliance Sales Mastery Test v1
IBM Governance guide
Why your business' data strategy probably needs an overhaul

By Dinesh Nirmal, General Manager, Data, AI and Automation, IBM

The age of data-driven work is here. Virtually all employees today say they are expected to work with data on the job to some extent, from retail employees needing to keep tabs on sales and supply chains to professional athletes using data to fine-tune their game.

Despite these expectations, most people have a hard time getting the data they need. Research shows that up to 82% of enterprises are inhibited by data silos, meaning data that's left in inaccessible repositories and databases. Simply put, the prevalence of these data silos can make working with data a real pain, a fact which perhaps helps explain why employees spend an average of one hour per week procrastinating on data-related tasks alone.

As more people need access to data – and the volume of data within an organization continues to dramatically grow – the task of governing that data, ensuring it is secure and compliant with privacy standards, is increasingly difficult.

Unlocking Innovation

It doesn't have to be this way. A clear data strategy defines how to make sense of vast amounts of data, align data initiatives to business strategy, and build solutions that span the entire organization. It helps organizations realize their data's potential, gather insights, and identify efficiencies while complying with increasingly complex regulations.

But a critical part of implementing a data strategy vision is getting the right technology in place. That's exactly why companies around the world are investing heavily in a solution that can weave their disparate data sources together, also known as a data fabric. A data fabric is a type of data architecture that automates data discovery, governance, and consumption, allowing enterprises to elevate the value of their data by providing access to the right data, at the right time, regardless of where it resides. 
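To make the concept concrete, here is a minimal sketch of a fabric-style access layer in Python; the catalog, roles, and connector below are invented stand-ins rather than any vendor's actual API.

```python
# Minimal sketch of a data-fabric-style access layer: one logical catalog
# that discovers where a dataset lives, enforces governance centrally,
# and hides silo-specific access from the consumer. All names are invented.
from dataclasses import dataclass
from typing import Callable, Dict, List, Set

@dataclass
class CatalogEntry:
    source: str                        # which silo holds the data
    loader: Callable[[], List[dict]]   # silo-specific connector
    allowed_roles: Set[str]            # central access policy

class DataFabric:
    def __init__(self) -> None:
        self._catalog: Dict[str, CatalogEntry] = {}

    def register(self, name: str, entry: CatalogEntry) -> None:
        self._catalog[name] = entry    # discovery: one logical name per dataset

    def read(self, name: str, role: str) -> List[dict]:
        entry = self._catalog[name]
        if role not in entry.allowed_roles:   # governance before consumption
            raise PermissionError(f"role {role!r} may not read {name!r}")
        return entry.loader()

fabric = DataFabric()
fabric.register("sales.orders", CatalogEntry(
    source="warehouse",
    loader=lambda: [{"order_id": 1, "total": 42.0}],  # stand-in connector
    allowed_roles={"analyst", "marketing"},
))
print(fabric.read("sales.orders", role="analyst"))
```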

Research shows organizations that have adopted a data fabric architecture are significantly more innovative. Companies that have actively deployed and are seeing value from AI, for example, are 283% more likely to have a data fabric architecture in place than those that have not.

Making Work Easier

A data fabric unlocks innovation because it makes employees' jobs easier. With a data fabric, employees don't need to spend time hunting down, validating, and verifying the data they need to do their jobs. A data fabric architecture is also critical for feeding the models, AI, and automation needed to eliminate repetitive, low-value tasks and divert resources to more interesting, value-adding work.

A data fabric also enables a comprehensive approach to privacy and data access. Consumers need assurances that their data is being protected and that companies are not running the risk of breaches, abuses, or other misuse that could damage their reputation.

The data-driven enterprise is more than just a buzzword – access to quality data can be transformative and differentiating. But ensuring access to quality data remains challenging for numerous reasons, and more employees need access to more data than ever before. A data strategy that takes advantage of data fabric architecture is the best way to democratize access to data so that data consumers — whether they are in human resources, customer service, manufacturing, or marketing — can get the data they need, while empowering IT teams to work smarter, not harder.

Optimize your data strategy and democratize data access with a data fabric. Check out IBM's new guide for data leaders here.

This post was created by IBM with Insider Studios.

Cloud-based Information Governance Market Located Worldwide Trends and Application – EMC, HP Autonomy, IBM, Symantec

The MarketWatch News Department was not involved in the creation of this content.

Aug 05, 2022 (Market Insight Reports) -- Overview of the Global Cloud-based Information Governance Market:

The Cloud-based Information Governance Market Report 2022 provides the latest industry data and future industry trends. The report lists the leading competitors and manufacturers in the Cloud-based Information Governance industry and provides strategic industry insights and analysis of the factors influencing the competitiveness of the market. The geographical scope of the Cloud-based Information Governance market is also studied. Forecast market information, SWOT analysis, market scenarios, and feasibility studies are the vital aspects analyzed in this report.

Looking forward, Market Intelligence Data Group expects the market to grow at a CAGR of 12.6% during 2022-2028.
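For context, compounding at that rate roughly doubles the market over the forecast window. A quick sketch of the arithmetic, using a made-up 2022 base value since the release does not state one:

```python
# Illustrative arithmetic for the cited 12.6% CAGR over 2022-2028.
# The 2022 base size is a hypothetical placeholder, not a report figure.
cagr = 0.126
years = 2028 - 2022
size_2022 = 100.0                       # arbitrary units
size_2028 = size_2022 * (1 + cagr) ** years
print(round(size_2028, 1))              # ~203.8, i.e. roughly a doubling
```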

Request a Sample Copy of this Report:

https://www.marketintelligencedata.com/reports/4306259/global-cloud-based-information-governance-market-insights-forecast-to-2028/inquiry?mode=DiVya

Leading Players in the Cloud-based Information Governance Market: EMC, HP Autonomy, IBM, Symantec, AccessData, Amazon, BIA, Catalyst, Cicayda, Daegis, Deloitte, Ernst & Young, FTI, Gimmal, Google, Guidance Software, Index Engines, Iron Mountain, Konica Minolta, Kroll Ontrak, Microsoft, Mimecast, Mitratech, Proofpoint, R and others.

The leading players of the Cloud-based Information Governance industry, their market shares, product portfolios, and company profiles are covered in this report. The leading market players are analyzed based on production volume, gross margin, market value, and price structure. The competitive market scenario among Cloud-based Information Governance players will help industry aspirants plan their strategies. The statistics offered in this report will be a precise and useful guide for shaping business growth.

Global Cloud-based Information Governance Market Segmentation:

Market Segmentation: By Application

BFSI

Public

Retail

Manufacturing

IT And Telecom

Healthcare

Others

Market Segmentation: By Type

Simple Storage And Retrieval

Basic Document Management

Complex Document Management

Functional Applications With Document Storage

Social Networking Applications With Document Storage

Get Special pricing with up to 30% Discount on the first purchase of this report:

https://www.marketintelligencedata.com/report/purchase/4306259?mode=su?mode=DiVya

Regional and Country-level Analysis:

The key regions covered in the Cloud-based Information Governance market report are North America, Europe, Asia Pacific, Latin America, and the Middle East and Africa. It also covers key countries: U.S., Canada, Germany, France, U.K., Italy, Russia, China, Japan, South Korea, India, Australia, Taiwan, Indonesia, Thailand, Malaysia, Philippines, Vietnam, Mexico, Brazil, Turkey, Saudi Arabia, U.A.E, etc.

Key questions answered in the report include:

  1. What will the market size and the growth rate be in 2028?
  2. What are the key factors driving the Global Cloud-based Information Governance Market?
  3. What are the key market trends impacting the growth of the Global Cloud-based Information Governance Market?
  4. What are the challenges to market growth?
  5. Who are the key vendors in the Global Cloud-based Information Governance Market?
  6. What are the market opportunities and threats faced by the vendors in the Global Cloud-based Information Governance Market?
  7. What trending factors influence the market shares of the Americas, APAC, Europe, and MEA?

Explore Full Report With Detailed TOC Here:

https://www.marketintelligencedata.com/reports/4306259/global-cloud-based-information-governance-market-insights-forecast-to-2028?mode=DiVya

Crucial Elements from the Table of Contents of Global Cloud-based Information Governance Market:

– Cloud-based Information Governance Market Overview

– Global Cloud-based Information Governance Market Competition, Profiles/Analysis, Strategies

– Global Cloud-based Information Governance Capacity, Production, Revenue (Value) by Region (2016-2022)

– Global Cloud-based Information Governance Supply (Production), Consumption, Export, Import by Region (2016-2022)

– Global Cloud-based Information Governance Market Regional Highlights

– Industrial Chain, Sourcing Strategy, and Downstream Buyers

– Marketing Strategy Analysis, Distributors/Traders

– Market Effect Factors Analysis

– Market Decisions for the present scenario

– Global Cloud-based Information Governance Market Forecast (2022-2028)

– Case Studies

– Research Findings and Conclusion

Finally, the Cloud-based Information Governance Market report is a credible source of market research that can exponentially accelerate your business. The report covers the principal regions and their economic situations, including product value, profit, capacity, production, supply, demand, market growth rate, forecasts, and so on. The Cloud-based Information Governance industry report additionally presents a new project SWOT analysis, investment feasibility analysis, and investment return analysis.

Thanks for reading this article; if you need anything more than this, let us know and we will prepare the report according to your requirements.

Customization services available with the report:

-20% Free customization.

-Five Countries can be added as per your choice.

-Five Companies can be added as per your choice.

-Free customization up to 40 hours.

-Post-sales support for 1 year from the date of delivery.

Contact Us:

Irfan Tamboli (Head of Sales) - Market Intelligence Data

Phone: +1 704 266 3234

sales@marketintelligencedata.com




IBM weighs in on Istio joining Google's Open Usage Commons

Google yesterday announced the creation of the Open Usage Commons (OUC), a new open source organization focused on trademarks. As part of the launch, Google revealed that the open-source service mesh project Istio would be joining the organization.

As a result, long-time partner and founding member of the Istio project IBM has expressed disappointment in the project joining the Open Usage Commons. According to a blog post by IBM Fellow, VP and CTO Jason McGee, the organization "doesn't live up to the community's expectation for open governance."

RELATED CONTENT: 5 reasons to be excited about Istio’s future

Today, Istio is the fourth fastest-growing open-source project on GitHub, and Google is the current owner of the Istio trademark. According to McGee, when IBM and Google originally launched Istio from a merger of Google's Istio and IBM's Amalgam8 projects, there was an agreement that the project would be contributed to the Cloud Native Computing Foundation.

“IBM continues to believe that the best way to manage key open source projects such as Istio is with true open governance, under the auspices of a reputable organization with a level playing field for all contributors, transparency for users, and vendor-neutral management of the license and trademarks,” McGee wrote. “Google should reconsider their original commitment and bring Istio to the CNCF.”

IBM previously explained that open governance means that a group of community-elected developers from a project's contributor base makes technical decisions about the project's future.

As a result, open governance reduces the risk of a project being abandoned or unmaintained, eliminates single-vendor control, and assures developers that their contributions will be accepted based on merit and what's best for the project, according to IBM.

“Without this vendor-neutral approach to project governance, there will be friction within the community of Kubernetes-related projects,” McGee wrote.

Google explained it created the OUC to help projects protect their project identity through programs such as trademark management and usage guidelines. 

The OUC consists of a Board of Directors and it will have advisory members selected by the projects to guide the trademark usage policies.

“The Open Usage Commons intends to serve independent projects as well as projects working with any open-source foundation. This intention was best served by creating a new organization that had neutral foundation affiliation,” Google stated in a post.

How 12 digital execs streamline tech for clinicians

The healthcare industry is increasingly going digital, with artificial intelligence helping diagnose conditions and patients with wearable smart devices being cared for at home.

But are these digital health initiatives actually improving patient care? And are they contributing to an already stressful environment?

Becker's reached out to healthcare leaders on the IT and clinical sides to get their perspectives on how the industry's digital shift is affecting patients and providers. In part one of this two-part series, we asked executives how technology had improved the patient experience.

In this second installment, Becker's asked digital executives: 

How do you believe the clinical side perceives your organization's digital health initiatives, and what are you doing to ensure the initiatives are improving patient care?

Tony Ambrozie. Senior Vice President and Chief Digital and Information Officer for Baptist Health South Florida (Coral Gables): Physicians look after the well-being of patients — including easy access to care and medical information and overall good experiences. For example, getting into and out of an encounter. Digital experiences for consumers help with exactly that if A) they work well for patients and B) they preferably help clinicians in some way or another, but definitely do not create unnecessary challenges and overhead for them.

For digital experiences for physicians, there is a huge legitimate appetite for adoption to simplify their work, especially with EHR burnout and lack of specific collaboration tools. But again, the expectation is that any technology must solve real needs and must work well, which many solutions in the past have not done; as such, there is a healthy "trust but verify" skepticism until proven.

Sure, in the current challenging economic situation, with inflation and labor shortage challenges for all healthcare providers in the U.S., some digital investments have to be prioritized ahead of others, but involving everybody in that balanced prioritization is the way to gain support. 

Tom Andriola. Chief Digital Officer at University of California Irvine Health: The pandemic exposed medical professionals to many types of technology-enabled interactions, many out of necessity. Reactions have varied. Some want to continue those practices, and some want to put them in a drawer and return to the practices of 2019.

For leaders who are attuned to the changes going on in the U.S. healthcare market, they clearly see the opportunity to have a robust strategic discussion around how these new models for virtual care, remote patient monitoring, "hospital at home," etc., will impact both the patient experience as well as the healthcare delivery model as we move toward more value-based care contracting.

We're using the opportunity to take a step back, evaluating our digital health initiatives that were implemented during the pandemic and engaging in a process of strategic planning — thinking about the future of our health system strategically and operationally. The conversation has brought together clinical, business and administrative leaders discussing how digitally enabled care fits into our strategic plan for current delivery capabilities and the future services that we see being expanded or coming online. The conversation has always included balancing quality, cost and access.

But post-pandemic healthcare is now also working with new definitions — expectations — around patient preference and experience. We also are evaluating the impact artificial intelligence technologies will have. It has been a good exercise for the organization in that it has forced the discussion and allowed us to reexamine our assumptions.

Zafar Chaudry, MD. Senior Vice President, Chief Digital Officer and CIO at Seattle Children's: At Seattle Children's we follow two paths to ensure clinical, patient, parent and caregiver stakeholders are actively and consistently engaged, and we also measure the satisfaction of our IT services using the Net Promoter Score. Our current NPS is +38.

For clinicians, we have our digital patient access, care and engagement team that works directly with them to provide digital solutions to help with their workflows. We ensure patient care is improving by measuring clinical outcomes using real-time dashboards — built on Microsoft Power BI and underpinned with IBM Netezza data warehousing technology — while multiple improvement projects are driven by our data lakes.

For our patients, parents and caregivers, we have formed an advisory group known as Parent Partners IT Children's Hospital, or PPITCH, to help us define and drive our patient-facing digital strategy. This group, combined with IT staff members, meets regularly to collaborate on how we can improve the digital and technology experience for our patients and families.
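As a point of reference, the Net Promoter Score cited above is conventionally the percentage of promoters minus the percentage of detractors; the response counts in this small sketch are invented purely to illustrate the formula.

```python
# Standard NPS arithmetic; the survey counts below are invented purely
# to reproduce a score like the +38 quoted above.
promoters, passives, detractors = 52, 34, 14   # ratings 9-10, 7-8, 0-6
total = promoters + passives + detractors
nps = round(100 * (promoters - detractors) / total)
print(f"NPS = +{nps}")   # NPS = +38 with these made-up counts
```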

Michael Hasselberg, PhD, RN. Chief Digital Health Officer at University of Rochester (N.Y.) Medical Center: One of the benefits of working at an academic medical center is the culture of inquisitiveness and creativity. We encourage our clinicians to bring forward the problems they are encountering, innovate and be part of the solution to advance care.

From the very beginning of our health system's digital transformation strategy, we made it a priority to include clinicians across many different disciplines in the governance process. It is our clinicians who prioritize our digital health initiatives and identify where early wins can be gained. In partnership with our informaticists, the front-line clinicians guide the integration of new digital tools into their current workflows and the electronic health record that they use daily.

Being data-driven and evidence-based are also pillars in academic medicine. To ensure that our digital health initiatives are actually improving patient care, we have invested significant data analytic resources around our strategy, while many of the researchers across our institution are studying the impact of these initiatives on health disparities and clinical outcomes. We have built data dashboards that provide feedback to our clinical, operational and technical teams that generate the insights needed to quickly iterate when we are not meeting our initiative goals. There is no question that healthcare is quickly moving into the digital age, and our clinicians at the University of Rochester Medical Center are engaged and excited for the future of patient care.
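To ground the dashboard plumbing described in these answers, here is a hedged sketch of the kind of warehouse query that could feed a clinical-outcomes dashboard; the DSN, schema, and column names are invented, and pyodbc serves only as a generic ODBC client, not any specific vendor integration.

```python
# Hypothetical outcomes-dashboard feed: average length of stay by unit
# over the last 30 days. Connection string and table names are made up.
import pyodbc
from datetime import date, timedelta

conn = pyodbc.connect("DSN=clinical_warehouse;UID=dashboard;PWD=secret")
cursor = conn.cursor()
cursor.execute(
    "SELECT unit, AVG(length_of_stay_days) FROM outcomes.encounters "
    "WHERE discharge_date >= ? GROUP BY unit",
    date.today() - timedelta(days=30),
)
for unit, avg_los in cursor.fetchall():
    print(unit, round(avg_los, 1))
```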

William Holland, MD. Senior Vice President of Care Management and Chief Medical Informatics Officer at Banner Health (Phoenix): Over the course of the first two years of the COVID-19 pandemic, we focused heavily on initiatives that helped support our goals of keeping our healthcare workers and patients safe. This included a significant increase in both inpatient and outpatient telehealth deployments, services and usage, which allowed patients and clinicians to provide and receive care in the ways that worked best for them.

We are now in a different phase of the pandemic, one where we remain mindful of COVID-19 and also are intensely focused on driving organizational recovery from the impact of the pandemic. Our digital initiatives are transitioning from a COVID-19 focus to one of improving the overall experience of our clinicians through device integration, efficiency through documentation redesign, clinical improvements through advanced analytics and connectedness of care for our patients. Throughout all of this, we have been intentional about involving our front-line clinicians in identifying opportunities, leading and participating in design teams and creating an environment that welcomes open and transparent feedback.

We also leverage a combination of process, balance and outcome measures in each area to ensure that we are making progress in our work and that it has a meaningful and measurable impact on the quality and safety of care we provide.

Claus Jensen. Chief Innovation Officer of Teladoc Health: Traditional healthcare and digital health solutions are increasingly intertwined, and this trend will accelerate. Instead of asking which type of solution to use when, wouldn't it be better to ask how we create a hybrid care model that brings the best of both and gives patients choice in a fully integrated and natively whole-person care model?

Our clinical leadership sees our digital health initiatives as the way to reach more people in more meaningful ways. And the way to make sure these initiatives Strengthen patient care is quite simply to work closely together and ensure that we fuse clinical and digital science effectively. We jointly believe that health equity, clinical efficacy and cost-effectiveness can all be addressed by the right blend of care components and resources.

Aaron Miri. Senior Vice President and Chief Digital and Information Officer at Baptist Health (Jacksonville, Fla.): We just went live on a brand new Epic electronic health record which is the direct result of listening to our caregivers on what they need to keep elevating the level of healthcare delivery. That's on top of investments in modernizing the technology stacks across the health system and ensuring that we block and tackle as much as we pursue items like healthcare artificial intelligence, ambient voice technology and other whiz-bang new stuff that are all the rage.

What I appreciate about Baptist Health is the laser focus on delivering the very best patient care possible versus an alternative approach of trying to act like a healthcare product vendor that dabbles in patient care. When you operate with that type of focus where you listen, respect, and engage in providing the highest quality patient care, it's only then that you can strive toward very advanced digital medicine therapies.

Aaron Neinstein, MD. Vice President of Digital Health at University of California San Francisco Health and Senior Director of the UCSF Center for Digital Health Innovation: We view digital health as a set of tools in our care delivery tool kit at UCSF that can help us advance quality care, improve experience, improve access to care and be more efficient in operations, allowing us to serve patients better.

Our digital health team works as part of cross-functional teams that include roles like operations, marketing, design, data science, engineering and product management, giving each team the full complement of expertise and perspectives to design, develop and deploy solutions that will positively impact these outcomes.

For example, for specialty referrals and patient scheduling, by deploying an improved patient web and mobile experience and more efficient back-end operations for handling referrals, we simplified and removed barriers to patients accessing care. Or, in deploying virtual care programs in lung transplant, inflammatory bowel disease and cancer patients receiving chemotherapy, leveraging wearable devices and mobile symptom assessment, our teams aimed to improve patient experience, reduce their need to travel for in-person care and increase the frequency of touchpoints and engagement with care teams, which we hope will have measurable positive impacts on care quality outcomes.

One major advantage of thinking digitally is the goal of thoughtful and deliberate measurement built into every workflow and solution. So, a critical foundation for any of our programs is that each cross-functional team identifies the patient journey, builds in an ability to measure what is happening and continually analyzes those data so that we can see the outcomes and also see which points of friction the patient is experiencing that we can further optimize for. By baking detailed measurements into each program, we can also monitor the data on whether digital health tools are working as we hope, to reduce disparities in care, whether different populations are accessing or using the tools in different ways, or whether measurable gaps in use across populations appear.

Danny Sama. Vice President and Chief Digital Executive at Northwestern Medicine (Chicago): Our clinicians see the great opportunity for digital health to positively impact patients and themselves. However, they are wary about a potential added burden to their clinical workflows. We are constantly thinking about where and how to integrate digital tech into clinician workflows as seamlessly as possible. And every digital solution we consider is grounded in a value proposition to both patients and clinicians in the form of improved experience, increased efficiency or reduced risk.

Eric Smith. Senior Vice President and Chief Digital Officer at Memorial Hermann Health System (Houston): Our clinical providers are eager to adopt our digital initiatives, from online scheduling to virtual appointments — as long as the technology truly makes the experience better, easier and more efficient for our patients. With that in mind, we're working on initiatives designed to help patients have seamless, frustration-free experiences.

One of these is a platform that will allow patients to interact with us more digitally — by text, for instance — to remove the friction they might feel when trying to get information. We'll be able to remind them about preventive screenings, wellness exams and other regular appointments, encouraging them to schedule this routine care and follow up when necessary.

Finally, we're continuing to expand the options for patients to use our enhanced virtual health services when it makes sense for them to do so — helping them get the care they need in the way that is most convenient and readily available to them.

Jason Szczuka. Chief Digital Officer for Bon Secours Mercy Health (Cincinnati): BSMH's digital business (Accrete Health Partners) is unique in that we closely partner with our clinical teams to prioritize, validate and scale our digital initiatives so that we can ensure we solve existing problems, complement our clinicians' workflows and facilitate better experiences for our patients. We are proud of how our clinicians positively perceive and lean into our initiatives.

Prat Vemana. Senior Vice President and Chief Digital Officer at Kaiser Permanente (Oakland, Calif.): At Kaiser Permanente, we are building on our strong foundation as an innovator and continuously expanding our digital platform to deliver more personalized, seamless experiences for our 12.6 million members. Our clinical teams at Kaiser Permanente view our digital health initiatives as an integrated effort. Digital is applied in every area of our organization — from consumer engagement, physician workflows and optimization, to the clinical care setting. Our physicians and clinical teams are involved in designing our digital patient experiences and applying digital to their work in order to provide exceptional, integrated patient care.

To ensure our digital health tools are improving patient care, stakeholders from both the digital and clinical sides are involved in the entire process when developing digital experiences for our members. Kaiser Permanente's unique integrated model facilitates this collaboration and ensures that the right experts are at the table to think about the patient digital experience holistically. Program management, product managers, experience designers, clinical experts and health plan experts are involved from strategy to execution, empowering them to design and deliver successful digital solutions for both the patient and physician. The partnership and collaboration between these groups is essential to ensuring our members' needs and preferences are addressed by digital tools.

Kaiser Permanente is using artificial intelligence and machine learning technology to Strengthen the health outcomes for our members and patients. We are ahead of the curve on delivering machine learning-enabled solutions at the point of care. We gain rapid adoption of these solutions because we have cultivated strong relationships between our physicians, members and patients.

For example, our Advance Alert Monitor tool analyzes electronic health record data for medical-surgical inpatients, proactively identifies those with a high likelihood of clinical deterioration and activates a rapid response care team to develop a care plan. This is completed through a predictive model that uses algorithms created from machine learning and data from more than 1.5 million patients. A 2020 Kaiser Permanente study showed that our Advance Alert Monitor tool is associated with statistically significant decreases in mortality [with between 550 and 3,020 lives saved over four years], hospital length of stay and intensive care unit length of stay, which shows us the positive impact our digital tools have in patient care and outcomes.
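For readers curious what such an early-warning model looks like in miniature, the toy sketch below trains a logistic regression on synthetic vital signs; the features, labels, and alert threshold are invented, and it vastly simplifies a system like Advance Alert Monitor.

```python
# Toy early-warning model: logistic regression over synthetic vital signs.
# Features, labels, and the 0.5 alert threshold are all invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# columns: heart_rate, respiratory_rate, systolic_bp
X = rng.normal(loc=[85, 18, 120], scale=[15, 4, 20], size=(500, 3))
y = (X[:, 0] > 100) & (X[:, 2] < 110)   # crude synthetic "deterioration" label
model = LogisticRegression(max_iter=1000).fit(X, y)

risk = model.predict_proba([[112, 24, 98]])[0, 1]
if risk > 0.5:                           # hypothetical escalation threshold
    print(f"activate rapid-response team (risk={risk:.2f})")
```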

Making the DevOps Pipeline Transparent and Governable

Subscribe on:

Transcript

Shane Hastie: Good day folks. This is Shane Hastie for the InfoQ Engineering Culture podcast. Today, I'm sitting down with David Williams from Quali. David, welcome. Thanks for taking the time to talk to us today.

David Williams: Thanks, Shane. It's great to be here.

Shane Hastie: Probably my first starting point for most of these conversations is who's David?

Introductions [00:23]

David Williams: David, he's a pretty boring character, really. He's been in the IT industry all his life, so there's only so many parties you can go and entertain people with that subject now. But I've been working since I first left school. My first jobs were working in IT operations in a number of financial companies. I started at the back end. For those of you who want to know how old I was, I remember a time when printing was a thing. And so decollating was my job, carrying tapes, separating printout, doing those sorts of things. So really I got a very grassroots level of understanding about what technology was all about, and it was nowhere near as glamorous as I'd been led to believe. So I started off, I'd say, working operations. I've worked my way through computer operations, systems administration, network operations. So I used to be part of a NOC team, customer support.

David Williams: I did that sort of path, as low as you can get in the ladder, to arguably about a rung above. And then what happened over that period of time was I worked a lot with distributed systems, lights out computing scenarios, et cetera and it enabled me to get more involved in some of the development work that was being done, specifically to manage these new environments, specifically mesh computing, clusters, et cetera. How do you move workloads around dynamically and how does the operating system become much more aware of what it's doing and why? Because obviously, it just sees them as workloads but needed to be smarter. So I got into development that way, really. I worked for Digital Equipment in its heyday, working on clusters and part of the team that was doing the operating system work. And so that, combined with my knowledge of how people were using the tech, being one of the people that was once an operations person, it enabled me as a developer to have a little bit of a different view on what needed to be done.

And that's what really motivated me to excel in that area, because I wanted to make sure that a lot of the things that were being built could be built in support of making operations simpler, making the accountability of what was going on more accountable to the business, to enable the services to be a little more transparent in how IT was using them around. So that throughout my career, luckily for me, the tech industry reinvents itself in a very similar way every seven years. So I just have to wait seven years to look like one of the smart guys again. So that's how I really got into it from the get go.

Shane Hastie: So that experience is what we'd today call developer experience, thinking about making it better for developers. What are the key elements of this developer experience for us?

The complexity in the developer role today [02:54]

David Williams: When I was in development, the main criteria that I was really responsible for was time. It was around time and production rates. I really had no clue why I was developing the software. Obviously, I knew what application I was working on and I knew what it was, but I never really saw the results. So over the years, I wasn't doing it for a great amount of time, to be honest with you. Because when I started looking at what needed to be done, I moved quite quickly from being a developer into being a product manager, which by the way, if you go from development to product management, it's not exactly a smooth path. But I think it was something that enabled me to be a better product manager at the time, because then I understood the operations aspects, I was a developer and I understood what it was that made the developer tick because that's why I did it.

It was a great job to create something and work on it and actually show the results. And I think over the years, it enabled me to look at the product differently. And I think that as a developer today, what developers do today is radically more advanced than what I was expected to do. I did not have continuous delivery. I did not really have continuous feedback. I did not have the responsibility for testing whilst developing. So there was no combined thing. It was very segmented and siloed. And I think over the years, I've seen what I used to do as an art form become extremely sophisticated with a lot more requirements of it than was there. And I think for my career, I was a VP of Products at IBM Tivoli, I was a CTO at BMC Software, and I worked for CA Technologies prior to its acquisition by Broadcom, where I was the Senior Vice President of Product Strategy.

But in all those jobs, it enabled me to really understand the value of the development practices and how these practices can be really honed in, in support between the products and the IT operations world, as well as really more than anything else, the connection between the developer and the consumer. That was never part of my role. I had no clue who was using my product. And as an operations person, I only knew the people that were unhappy. So I think today's developer is a much more... They tend to be highly skilled in a way that I was not because coding is part of their role. Communication, collaboration, the integration, the cloud computing aspects, everything that you have to now include from an infrastructure is significantly in greater complexity. And I'll summarize by saying that I was also an analyst for Gartner for many years and I covered the DevOps toolchains.

And the one thing I found out there was there isn't a thing called DevOps that you can put into a box. It's very much based upon a culture and a type of company that you're with. So everybody had their interpretation of their box. But one thing was very common, the complexity in all cases was significantly high and growing to the point where the way that you provision and deliver the infrastructure in support of the code you're building, became much more of a frontline job than something that you could accept as being a piece of your role. It became a big part of your role. And that's what really drove me towards joining Quali, because this company is dealing with something that I found as being an inhibitor to my productivity, both as a developer, but also when I was also looking up at the products, I found that trying to work out what the infrastructure was doing in support of what the code was doing was a real nightmare.

Shane Hastie: Let's explore that. Stepping back a little bit, you made the point about DevOps as a culture. What are the key cultural elements that need to be in place for DevOps to be effective in an organization?

The elements of DevOps culture [06:28]

David Williams: Yeah, this is a good one. When DevOps was an egg, it really was an approach that was radically different from the norm. And what I mean, obviously for people that remember it back then, it was the continuous... Had nothing to do with Agile. It was really about continuous delivery of software into the environment in small chunks, microservices coming up. It was delivering very specific pieces of code into the infrastructure, continuously, evaluating the impact of that release and then making adjustments and changes in respect to the feedback that gave you. So the fail forward thing was very much an accepted behavior. What it didn't do at the time, and it sort of glossed over it a bit, was it did remove a lot of the compliance and regulatory type of mandatory things that people would use in the more traditional ways of developing and delivering code, but it was a fledgling practice.

And from that base form, it became a much, much bigger one. So really what that culturally meant was initially it was many, many small teams working in combination toward a bigger outcome, whether it was stories in support of epics or whatever the response was. But I find today, it has a much bigger play because now it does have Agile as an inherent construct within the DevOps procedures, so you've got the ability to do teamwork and collaboration and all the things that Agile defines, but you've also got the continuous delivery part of that added on top, which means that at any moment in time, you're continually putting out updates and changes and then measuring the impact. And I think today's challenge is really the feedback loop isn't as clear as it used to be because people are starting to use it for serious application delivery now.

The consumer used to be the primary recipient; the LAMP stacks that used to be built out there have now moved into the back-end type of tech. And at that point, it gets very complex. So I think that the complexity of the pipeline is something that the DevOps team needs to work on, which means that even though collaboration and people working closely together is a no-brainer no matter what you're doing, to be honest, the ability to understand and have a focused understanding of the outcome objective, no matter who you are in the DevOps pipeline, that you understand what you're doing and why, and everybody that's in that team understands their contribution, irrespective of whether they talk to each other, I think is really important, which means that technology supporting that needs to have context.

I need to understand what the people around me have done to the code. I need to know what stage it's in. I need to understand where it came from and who I pass it to. So all that needs to be not just the cultural thing, but the technology itself also needs to adhere to that type of practice.

Shane Hastie: One of the challenges or one of the pushbacks we often hear about is the lack of governance or the lack of transparency for governance in the DevOps space. How do we overcome that?

Governance in DevOps [09:29]

David Williams: The whole approach of DevOps, initially, was to think about things in small increments, the bigger objective obviously being the clarity. But the increments were to provide lots and lots of enhancements and advances. When you fragment in that way and give the developer the ability to make choices on how they both code and provision infrastructure, it can sometimes not necessarily lead to things being insecure or not governed, but it means that there's different security and different governance within a pipeline. So where the teams are working quite closely together, that may not automatically move if you've still got your different testing team. So if your testing is not part of your development code, which in some cases it is, some cases it isn't, and you move from one set of infrastructure, for example, that supports the code to another one, they might be using a completely different set of tooling.

They might have different ways with which to measure the governance. They might have different guardrails, obviously, and everything needs to be accountable to change because financial organizations, in fact, most organizations today, have compliance regulations that say any changes to any production or non-production environment, in most cases, require accountability. And so if you're not reporting in a, say, consistent way, it makes the job of understanding what's going on in support of compliance and governance really difficult. So it really requires governance to be a much more abstract, but end-to-end thing, as opposed to each individual stage having its own practices. So governance today is starting to move to a point where one person needs to see the end-to-end pipeline and understand exactly what is going on. Who is doing what, where and how? Who has permissions and access? What are the configurations that are changing?
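One way to picture the consistent, end-to-end change accounting Williams describes is a single normalized record emitted from every pipeline stage; the schema below is a hypothetical sketch, not an established standard or any product's format.

```python
# Sketch: normalize change events from every pipeline stage into one
# auditable record so governance sees a consistent end-to-end view.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ChangeEvent:
    pipeline: str       # which delivery pipeline
    stage: str          # "dev" | "test" | "release"
    actor: str          # who made the change
    resource: str       # what was changed
    diff: dict          # before/after values

def audit(event: ChangeEvent) -> str:
    record = {"ts": datetime.now(timezone.utc).isoformat(), **asdict(event)}
    return json.dumps(record)   # would be shipped to a central audit log

print(audit(ChangeEvent("payments-service", "test", "jsmith",
                        "k8s/deployment.yaml", {"replicas": [2, 4]})))
```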

Shane Hastie: Sounds easy, but I suspect there's a whole lot of... Again, coming back to the culture, we're constraining things that for a long time, we were deliberately releasing.

Providing freedom within governance constraints [11:27]

David Williams: This is a challenge. When I was a developer, the tools were my choice, and that's applicable today. When I heard the word abstract, it put the fear of God into me, to be honest with you. I hated the word abstract. I didn't want anything that made my life worse. I mean, being accountable was fine. When I used to hear the word frameworks, I remember even balking at the idea of a technology that brought all my coding environment into one specific view. So today, nothing's changed. A developer has got to be able to use the tools that they want to use and I think that the reason for that is that with the amount of skills that people have, we're going to have to, as an industry, get used to the fact that people have different skills and different focuses and different preferences of technology.

And so to actually mandate a specific way of doing something, or implement a governance engine that inhibits my ability to innovate, is counterproductive. It needs to have that balance. You need to be able to have innovation, freedom of choice, and the ability to use the technology in the way that you need to build the code. But you also need to be able to provide the accountability to the overall objective, so you need to have that end-to-end view on what you're doing. So as you are part of a team, each team member should have responsibility for it and you need to be able to provide the business with the things that it needs to make sure that nothing goes awry and that nothing has been breached. So no security issues occurring, no configurations left untracked. So how do you do that?

Transparency through tooling [12:54]

David Williams: And as I said, that's what drove me towards Quali, because as a company, the philosophy was very much on the infrastructure. But when I spoke to the CEO of the company, we had a conversation prior to my employment here, based upon my prior employer, which was a company that was developing toolchain products to help developers and to help people release into production. And the biggest challenge that we had there was really understanding what the infrastructure was doing and the governance that was being put upon those pieces. So think about it as you being a train, but having no clue about what gauge the track is at any moment in time. And you had to put an awful lot of effort into working out what is being done underneath the hood. So what I'm saying is that there needed to be something that did that magic thing.

It enabled you with a freedom of choice, captured your freedom of choice, translated it into a way that adhered it to a set of common governance engines without inhibiting your ability to work, but also provided visibility to the business to do governance and cost control and things that you can do when you take disparate complexity, translate it and model it, and then actually provide that consistency view to the higher level organizations that enable you to prove that you are meeting all the compliance and governance rules.

Shane Hastie: Really important stuff there, but what are the challenges? How do we address this?

The challenges of complexity [14:21]

David Williams: See, the ability to address it and to really understand why the problems are occurring. Because if you talk to a lot of developers today and say, “How difficult is your life and what are the issues?", the conversation you'll have with a developer is completely different than the conversation you'll have with a DevOps team lead or a business unit manager, in regards to how they see applications being delivered and coded. So at the developer level, I think the tools that are being developed today, so the infrastructure providers, for example, the application dictates what it needs. It's no longer, I will build an infrastructure and then you will layer the applications on like you used to be able to do. Now what happens is applications and the way that they behave is actually defining where you need to put the app, the tools that are used to both create it and manage it from the Dev and the Op side.

So really what the understanding is, okay, that's the complexity. So you've got infrastructure providers, the clouds, so you've got different clouds. And no matter what you say, they're all different. In fact, serverless, the classic adoption of serverless, is very proprietary in nature. You can't just move one serverless environment from one to another. I'm sure there'll be a time when you might be able to do that, but today it's extremely proprietary. So you've got the infrastructure providers. Then you've got the people that are at the top layer. So you've got the infrastructure technology layer. And that means that on top of that, you're going to have VMs or containers or serverless, something that sits on your cloud. And that again is defined by what the application needs, in respect to portability, where it lives, whether it lives in the cloud or partly at the edge, wherever you want to put it.

And then of course on top of that, you've got all the things that you can use that enable you to instrument and code to those things. So you've got things like Helm charts for containers, and you've got Terraform for developing the infrastructure-as-code pieces, or you might be using Puppet or Chef or Ansible. So you've got lots of tools out there, including all the other tools from the service providers themselves. So you've got a lot of the instrumentation. And so you've got that stack. So the skills you've got, so you've got the application defining what you want to do, the developer chooses how they use it in support of the application outcome. So really what you want to be able to do is have something that has a control plane view that says, okay, you can do whatever you want.

Visibility into the pipeline [16:36]

David Williams: These are the skills that you need. But if people leave, what do you do? Do you go and get all the other developers to try and debug and translate what the coding did? Wouldn't it be cool instead to have a set of tech with which you could understand what the different platform configuration tools did and how they applied, and look at it in a much more consistent form? It doesn't stop them using what they want, but the layer basically says, "I know, I've discovered what you're using. I've translated how it's used, and I'm now enabling you to model it in a way that enables everybody to use it." So the skills thing is always going to exist. The turnover of people is, I would say, even more damaging than the skills because people come and go quite freely today. It's the way that the market is.

And then there's the accountability. What do the tools do and why do they do it? So you really want to also deal with the governance piece that we mentioned earlier on; you also want to provide context. And I think that the thing that's missing when you build infrastructure as code and you do all these other things is, even though you know why you're building it and you know what it does to build it, that visibility when you're going to have a conversation with the DevOps lead and the business unit manager. Wouldn't it be cool if they could actually work out that what you did is in support of what they need? So it has the application ownership pieces, for example, a business owner. These are the things with which we provide context. So as each piece of infrastructure is developed through the toolchain, it adds context and the context is consistent.

So as the environments are moved in a consistent way, you actually have context that says this was planned, this was developed, and this is what it was done for. This is how it was tested. I'm now going to leverage everything that the developer did, but now add my testing tools on top. And I'm going to move that in with the context. I'm now going to release the technology when I deploy, releasing it into either further testing or production. But the point is that as things get provisioned, whether you are using different tools at different stages, or whether you are using different platforms with which to develop and then test and then release, you should have some view that says all these things are the same thing in support of the business outcome and that is all to do with context. So as I say, why I joined Quali was because it provides models that provide that context and I think context is very important and it's not always mentioned.
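A minimal sketch of that kind of context model, assuming a hypothetical environment record that keeps its business context, tags, and stage history as it is promoted:

```python
# Sketch: an environment record that carries business context and
# consistent tags from dev through release, so cost and governance
# reporting see the same thing at every stage. Field names are invented.
from dataclasses import dataclass, field

@dataclass
class Environment:
    application: str
    owner: str
    purpose: str
    stage: str = "plan"
    tags: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def promote(self, new_stage: str) -> None:
        self.history.append(self.stage)   # audit trail travels with the env
        self.stage = new_stage

env = Environment(application="claims-portal", owner="bu-insurance",
                  purpose="release 2.4 regression testing",
                  tags={"cost-center": "cc-1142"})
for stage in ("dev", "test", "release"):
    env.promote(stage)
print(env.stage, env.history, env.tags["cost-center"])
```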

As a coder, I used to write lots and lots of things in the code that gave people a clue on what I was doing. I used to have revision numbers. But outside of that and what I did to modify the code within a set of files, I really didn't have anything about the business it was supporting. And I think today, with the fragmentation that exists, you've got to give people clues on why infrastructure is being deployed, used, and retired, and it needs to be done across the life cycle because you don't want dormant infrastructure sitting out there. So you've got to have it accountable, and that's where the governance comes in. So the one thing I didn't mention earlier on was you've got to have the ability to work out what you're using, why it's being used, and why it's out there absorbing capacity and compute, costing me money, and yet no one seems to be using it.

Accountability and consistency without constraining creativity and innovation [19:39]

David Williams: So you want to have that accountability, and with context in it, that at least gives you information that you can relay back to the business to say, "This is what it cost to actually develop the full life cycle of our app, in that particular stage of the development cycle." So it sounds very complex because it is, but the way to simplify it is really to not abstract it, but consume it. So you discover it, you work out what's going on and you create a layer of technology that can actually provide consistent costing, through consistent tagging, which you can do with the governance, consistent governance, so you're actually measuring things in the same way, and you're providing consistency through the applications layer. So you're saying all these things happen in support of these applications, et cetera. So if issues occur, bugs occur, when it reports itself integrated with the service management tools, suddenly what you have there is a problem that's reported in response to an application, to a release specific to an application, which then associates itself with a service level, which enables you to actually do reporting and remediation that much more efficiently.

So that's where I think we're really going: the skills are always going to be fragmented and you shouldn't inhibit people doing what they need. And I think the last thing I'll mention is you should have the infrastructure delivered in the way you want it. So you've got CLIs, if that's a preferred way, APIs to call it if you want to. But for those who don't have the skills, it's not a developer-only world. If I'm at an abstraction layer and I'm more of an operations person or someone that doesn't have the deep-diving code skills, I should just need to see a catalog of available environments built by coding, built by the people that actually have that skill. And I should be able to, in a single click, provision an environment in support of an application requirement that doesn't require me to be a coder.

So that means that you can actually share things. So coders can code, and that captures the environment. If that environment is needed by someone that doesn't have the skills, because it's consistent and has all that information in it, I can hit a click. It goes and provisions that infrastructure and I haven't touched code at all. So that's how you see the skills being leveraged. And you've just got to accept the fact that people will be transient going forward. They will work from company to company, project to project, and their skills will be diverse, but you've got to provide a layer with which that doesn't matter.
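As a rough sketch of that self-service pattern, with a made-up blueprint format and a provision step standing in for a real orchestrator:

```python
# Sketch: coders publish an environment blueprint once; anyone can then
# provision it from the catalog in "one click". Entries are hypothetical.
catalog = {
    "payments-dev-env": {
        "blueprint": "terraform/payments",            # authored by coders
        "defaults": {"region": "us-east-1", "size": "small"},
    },
}

def provision(entry_name: str, requested_by: str) -> dict:
    entry = catalog[entry_name]
    # a real implementation would invoke the IaC tool referenced above
    return {"env": entry_name, "owner": requested_by, **entry["defaults"]}

print(provision("payments-dev-env", requested_by="ops-analyst"))
```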

Shane Hastie: Thank you very much. If people want to continue the conversation, where do they find you?

David Williams: They can find me in a number of places. I think the best place is I'm at Quali. It is David.W@Quali.com. I'm the only David W., which is a good thing, so you'll find me very easily. Unlike a plane I got the other day, where I was the third David Williams on the plane, the only one not to get an upgrade. So that's where you can find me. I'm also on LinkedIn, Dave Williams on LinkedIn can be found under Quali and all the companies that I've spoken to you about. So as I say, I'm pretty easy to find. And I would encourage, by the way, anybody to reach out to me, if they have any questions about what I've said. It'd be a great conversation.

Shane Hastie: Thanks, David. We really appreciate it.

David Williams: Thank you, Shane.


Explainable AI Is Trending And Here's Why
Killexams : Explainable AI Is Trending And Here’s Why

According to the 2022 IBM Institute for Business Value study on AI Ethics in Action, building trustworthy Artificial Intelligence (AI) is perceived as a strategic differentiator and organizations are beginning to implement AI ethics mechanisms.

Seventy-five percent of respondents believe that ethics is a source of competitive differentiation. More than 67% of respondents who view AI and AI ethics as important indicate that their organizations outperform their peers in sustainability, social responsibility, and diversity and inclusion.

The survey showed that 79% of CEOs are prepared to embed AI ethics into their AI practices, up from 20% in 2018, but less than a quarter of responding organizations have operationalized AI ethics. Less than 20% of respondents strongly agreed that their organization's practices and actions match (or exceed) their stated principles and values.

Peter Bernard, CEO of Datagration, says that understanding AI gives companies an advantage, but Bernard adds that explainable AI allows businesses to optimize their data.

"Not only are they able to explain and understand the AI/ML behind predictions, but when errors arise, they can understand where to go back and make improvements," said Bernard. "A deeper understanding of AI/ML allows businesses to know whether their AI/ML is making valuable predictions or whether they should be improved."

Bernard believes this can ensure incorrect data is spotted early on and stopped before decisions are made.

Avivah Litan, vice president and distinguished analyst at Gartner, says that explainable AI also furthers scientific discovery as scientists and other business users can explore what the AI model does in various circumstances.

"They can work with the models directly instead of relying only on what predictions are generated given a certain set of inputs," said Litan.

But John Thomas, Vice President and Distinguished Engineer in IBM Expert Labs, says that at its most basic level, explainable AI is the set of methods and processes that help us understand a model's output. "In other words, it's the effort to build AI that can explain to designers and users why it made the decision it did based on the data that was put into it," said Thomas.

Thomas says there are many reasons why explainable AI is urgently needed.

"One reason is model drift. Over time as more and more data is fed into a given model, this new data can influence the model in ways you may not have intended," said Thomas. "If we can understand why an AI is making certain decisions, we can do much more to keep its outputs consistent and trustworthy over its lifecycle."

Thomas adds that at a practical level, we can use explainable AI to make models more accurate and refined in the first place. "As AI becomes more embedded in our lives in more impactful ways, [..] we're going to need not only governance and regulatory tools to protect consumers from adverse effects, we're going to need technical solutions as well," said Thomas.

"AI is becoming more pervasive, yet most organizations cannot interpret or explain what their models are doing," said Litan. "And the increasing dependence on AI escalates the impact of mis-performing AI models with severely negative consequences," said Litan.

Bernard takes it back to a practical level, saying that explainable AI [..] creates proof of what senior engineers and experts "know" intuitively, while simultaneously explaining the reasoning behind it. "Explainable AI can also take commonly held beliefs and prove that the data does not back them up," said Bernard.

"Explainable AI lets us troubleshoot how an AI is making decisions and interpreting data is an extremely important tool in helping us ensure AI is helping everyone, not just a narrow few," said Thomas.

Hiring is an example of where explainable AI can help everyone.

Thomas says hiring managers deal with all kinds of hiring and talent shortages and usually get more applications than they can read thoroughly. This means there is a strong demand to be able to evaluate and screen applicants algorithmically.

"Of course, we know this can introduce bias into hiring decisions, as well as overlook a lot of people who might be compelling candidates with unconventional backgrounds," said Thomas. "Explainable AI is an ideal solution for these sorts of problems because it would allow you to understand why a model rejected a certain applicant and accepted another. It helps you make your make model better.”

Making AI trustworthy

IBM's AI Ethics survey showed that 85% of IT professionals agree that consumers are more likely to choose a company that's transparent about how its AI models are built, managed and used.

Thomas says explainable AI is absolutely a response to concerns about understanding and being able to trust AI's results.

"There's a broad consensus among people using AI that you need to take steps to explain how you're using it to customers and consumers," said Thomas. "At the same time, the field of AI Ethics as a practice is relatively new, so most companies, even large ones, don't have a Head of AI ethics, and they don't have the skills they need to build an ethics panel in-house."

Thomas believes it's essential that companies begin thinking about building those governance structures. "But there is also a need for technical solutions that can help companies manage their use of AI responsibly," said Thomas.

Driven by industry, compliance or everything?

Bernard points to the oil and gas industry as an example of why explainable AI is necessary.

"Oil and gas have [..] a level of engineering complexity, and very few industries apply engineering and data at such a deep and constant level like this industry," said Bernard. "From the reservoir to the surface, every aspect is an engineering challenge with millions of data points and different approaches."

Bernard says that in this industry, operators and companies still utilize spreadsheets and other home-grown systems built decades ago. "Utilizing ML enables them to take siloed knowledge, strengthen it and create something transferrable across the organization, allowing consistency in decision making and process."

"When oil and gas companies can perform more efficiently, it is a win for everyone," said Bernard. "The companies see the impact in their bottom line by producing more from their existing assets, lowering environmental impact, and doing more with less manpower."

Bernard says this leads to more supply, helping ease the burden on demand. "Even modest increases, like a 10% improvement in production, can have a massive impact on supply; the more production we have [..] consumers will see relief at the pump."

But Litan says the trend toward explainable AI is mainly driven by regulatory compliance.

In a 2021 Gartner survey, AI in Organizations, respondents reported that regulatory compliance is the top reason privacy, security and risk are barriers to AI implementation.

"Regulators are demanding AI model transparency and proof that models are not generating biased decisions and unfair 'irresponsible' policies," said Litan. "AI privacy, security and/or risk management starts with AI explainability, which is a required baseline."

Litan says Gartner sees the biggest uptake of explainable AI in regulated industries like healthcare and financial services. "But we also see it increasingly with technology service providers that use AI models, notably in security or other scenarios," said Litan.

Litan adds that another reason explainable AI is trending is that organizations are unprepared to manage AI risks and often cut corners around model governance. "Organizations that adopt AI trust, risk and security management – which starts with inventorying AI models and explaining them – get better business results," adds Litan.

But IBM's Thomas doesn't think you can parse the uptake of explainable AI by industry.

"What makes a company interested in explainable AI isn't necessarily the industry they're in; they're invested in AI in the first place," said Thomas. "IT professionals at businesses deploying AI are 17% more likely to report that their business values AI explainability. Once you get beyond exploration and into the deployment phase, explaining what your models are doing and why quickly becomes very important to you."

Thomas says that IBM sees some compelling use cases in specific industries starting with medical research.

"There is a lot of excitement about the potential for AI to accelerate the pace of discovery by making medical research easier," said Thomas. "But, even if AI can do a lot of heavy lifting, there is still skepticism among doctors and researchers about the results."

Thomas says explainable AI has been a powerful solution to that particular problem, allowing researchers to embrace AI modeling to help them solve healthcare-related challenges because they can refine their models, control for bias and monitor the results.

"That trust makes it much easier for them to build models more quickly and feel comfortable using them to inform their care for patients," said Thomas.

IBM worked with Highmark Health to build a model using claims data to model sepsis and COVID-19 risk. But again, Thomas adds that because it's a tool for refining and monitoring how your AI models perform, explainable AI shouldn't be restricted to any particular industry or use case.

"We have airlines who use explainable AI to ensure their AI is doing a good job predicting plane departure times. In financial services and insurance, companies are using explainable AI to make sure they are making fair decisions about loan rates and premiums," said Thomas. "This is a technical component that will be critical for anyone getting serious about using AI at scale, regardless of what industry they are in."

Guard rails for AI ethics

What does the future look like with AI ethics and explainable AI?

Thomas says the hope is that explainable AI will spread and see adoption because that will be a sign companies take trustworthy AI, both the governance and the technical components, very seriously.

He also sees explainable AI as essential guardrails for AI Ethics down the road.

"When we started putting seatbelts in cars, a lot more people started driving, but we also saw fewer and less severe accidents," said Thomas. "That's the obvious hope - that we can make the benefits of this new technology much more widely available while also taking the needed steps to ensure we are not introducing unanticipated consequences or harms."

One of the most significant factors working against the adoption of AI and its productivity gains is the genuine need to address concerns about how AI is used, what types of data are being collected about people, and whether AI will put them out of a job.

But Thomas says that worry is contrary to what’s happening today. "AI is augmenting what humans can accomplish, from helping researchers conduct studies faster to assisting bankers in designing fairer and more efficient loans to helping technicians inspect and fix equipment more quickly," said Thomas. "Explainable AI is one of the most important ways we are helping consumers understand that, so a user can say with a much greater degree of certainty that no, this AI isn't introducing bias, and here's exactly why and what this model is really doing."

One tangible example IBM uses is AI Factsheets in their IBM Cloud Pak for Data. IBM describes the factsheets as 'nutrition labels' for AI, which allow them to list the types of data and algorithms that make up a particular model, in the same way a food item lists its ingredients.
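A factsheet like that can be as small as a structured record published alongside the model. The sketch below illustrates the idea; the fields are assumptions for the example, not IBM's actual FactSheets schema.

    # Minimal "nutrition label" for a model; fields are illustrative,
    # not IBM's FactSheets schema.
    from dataclasses import dataclass, field, asdict

    @dataclass
    class ModelFactsheet:
        name: str
        purpose: str
        algorithm: str
        training_datasets: list = field(default_factory=list)
        known_limitations: list = field(default_factory=list)

    sheet = ModelFactsheet(
        name="loan-default-v3",
        purpose="Estimate probability of default for consumer loans",
        algorithm="gradient-boosted trees",
        training_datasets=["loans_2018_2021", "bureau_scores_v2"],
        known_limitations=["not validated for small-business loans"],
    )
    print(asdict(sheet))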

"To achieve trustworthy AI at scale, it takes more than one company or organization to lead the charge,” said Thomas. “AI should come from a diversity of datasets, diversity in practitioners, and a diverse partner ecosystem so that we have continuous feedback and improvement.”

Why AI is critical to meet rising ESG demands



Could artificial intelligence (AI) help companies meet growing expectations for environmental, social and governance (ESG) reporting? 

Certainly, over the past couple of years, ESG issues have soared in importance for corporate stakeholders, with increasing demands from investors, employees and customers. According to S&P Global, in 2022 corporate boards and government leaders “will face rising pressure to demonstrate that they are adequately equipped to understand and oversee ESG issues — from climate change to human rights to social unrest.”

ESG investing, in particular, has been a big part of this boom: Bloomberg Intelligence found that ESG assets are on track to exceed $50 trillion by 2025, representing more than a third of the projected $140.5 trillion in total global assets under management. Meanwhile, ESG reporting has become a top priority that goes beyond ticking off regulatory boxes. It’s used as a tool to attract investors and financing, as well as to meet expectations of today’s consumers and employees.  

But according to a recent Oracle ESG global study, 91% of business leaders are currently facing major challenges in making progress on sustainability and ESG initiatives. These include finding the right data to track progress, and time-consuming manual processes to report on ESG metrics.

“A lot of the data that needs to be collected either doesn’t exist yet or needs to come from many systems,” said Sem J. de Spa, senior manager of digital risk solutions at Deloitte. “It’s also way more complex than just your company, because it’s your suppliers, but also the suppliers of your suppliers.” 

ESG data challenges driving use of AI

That is where AI has increasingly become part of the ESG equation. AI can help manage data, glean data insights, operationalize data and report against it, said Christina Shim, VP of strategy and sustainability, AI applications software at IBM. 

“We need to make sure that we’re gathering the mass amounts of data when they’re in completely different silos, that we’re leveraging that data to strengthen operations within the business, that we’re reporting that data to a variety of stakeholders and against a very confusing landscape of ESG frameworks,” she said.
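Mechanically, that gathering-and-reporting step often begins as joining siloed metric tables and applying emission factors. A minimal sketch follows; the sites, figures, and factors are invented, and real factors must come from your reporting framework.

    # Minimal ESG rollup: join siloed metrics and apply emission factors.
    # All names, numbers, and factors below are invented for illustration.
    import pandas as pd

    energy = pd.DataFrame({"site": ["plant_a", "plant_b"], "kwh": [120_000, 95_000]})
    travel = pd.DataFrame({"site": ["plant_a", "plant_b"], "air_km": [40_000, 12_000]})

    report = energy.merge(travel, on="site")
    report["scope2_kg_co2e"] = report["kwh"] * 0.4        # illustrative factor
    report["travel_kg_co2e"] = report["air_km"] * 0.15    # illustrative factor
    print(report)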

According to Deloitte, although a BlackRock survey found that 92% of S&P companies were reporting ESG metrics by the end of 2020, 53% of global respondents cited “poor quality or availability of ESG data and analytics” and another 33% cited “poor quality of sustainability investment reporting” as the two biggest barriers to adopting sustainable investing. 

Making progress is a must, experts say. “Increasingly, these ESG and sustainability commitments are no longer simply nice to have,” said Shim. “It’s really becoming kind of like a basis of what organizations need to be focused on, and there are increasingly higher standards that have to be integrated into the operations of all businesses,” she explained.

“The challenge is huge, especially as new regulations and standards emerge and ESG requirements are under more scrutiny,” said De Spa. This has led to hundreds of technology vendors flooding the market that use AI to help tackle these issues. “We need all of them, at least a lot of them, to solve these challenges,” he said.

The human-AI ESG connection

On top of the operational challenges around ESG, the Oracle study found 96% of business leaders admit human bias and emotion often distract from the end ESG goals. In fact, 93% of business leaders say they would trust a bot over a human to make sustainability and social decisions. 

“We have people who are coming up now who are hardwired for ESG,” said Pamela Rucker, CIO advisor and instructor for Harvard Professional Development, who helped put together the Oracle study. “The idea that they would trust a computer isn’t different for them. They already trust a computer to guide them to work, to give them directions, to tell them where the best prices are.”

But, she added, humans can work with technology to create more meaningful change, and the survey also found that business leaders believe there is still a place for humans in ESG efforts, including managing change (48%), educating others (46%), and making strategic decisions (42%).

“Having a machine that might be able to sift through some of that data will allow the humans to come in and look at places where they can add some context around places where we might have some ambiguity, or we might have places where there’s an opportunity,” said Rucker. “AI gives you a chance to see more of that data, and you can spend more time trying to come up with the insights.” 

How companies can get started with AI and ESG

Seth Dobrin, chief AI officer at IBM, told VentureBeat that companies should get started now on using AI to harness ESG data. “Don’t wait for additional regulations to come,” he said. 

Getting a handle on data is essential as companies begin their journey towards bringing AI technologies into the mix. “You need a baseline to understand where you are, because you can make all the goals and imperatives, you can commit to whatever you want, but until you know where you are, you’re never gonna figure out how to get to where you need to get to,” he said. 

Dobrin said he also sees organizations moving from a defensive, risk management posture around ESG to a proactive approach that is open to AI and other technologies to help. 

“It’s still somewhat of a compliance exercise, but it’s shifting,” he said. “Companies know they need to get on board and think proactively so that they are considered a thought leader in the space and not just a laggard doing the bare minimum.” 

One of the key areas IBM is focusing on, he added, is helping clients connect their ESG data and the data monitoring with the actual operations of the business.

“If we’re thinking about business facilities and assets, infrastructure and supply chain as something that’s relevant across industries, all the data that’s being sourced needs to be rolled up and integrated with data and process flows within the ESG reporting and management piece,” he said. “You’re sourcing the data from the business.” 

Deloitte works with Signal AI on ESG efforts

Deloitte recently partnered with Signal AI, which offers AI-powered media intelligence, to help the consulting firm’s clients spot and address supplier risks related to ESG issues. 

“With the rise of ESG and as businesses are navigating a more complex environment than ever before, the world has become awash in unstructured data,” said David Benigson, CEO of Signal AI. “Businesses may find themselves constantly on the back foot, responding to these issues reactively rather than having the sort of data and insights at their fingertips to be at the forefront.” 

The emergence of machine learning and AI, he said, can fundamentally address those challenges. “We can transform data into structured insights that help business leaders and organizations better understand their environment and get ahead of those risks, those threats faster, but also spot those opportunities more efficiently too – providing more of an outside-in perspective on issues such as ESG.” 

He pointed to recent backlash around “greenwashing,” including by Elon Musk (who called ESG a “scam” because Tesla was removed from the S&P 500’s ESG Index). “There are accusations that organizations are essentially marking their own homework when it comes to scoring their performance and alignment against these sorts of ESG commitments,” he said. “At Signal, we provide the counter to that – we don’t necessarily analyze what the company says they’re going to do, but what the world thinks about what that company is doing and what that company is actually doing in the wild.”

Deloitte’s de Spa said the firm uses Signal AI for what it calls a “responsible value chain” – basically, supplier risk management. 

“For example, a sustainable organization that cleans oceans and rivers of all kinds of waste asked us to help them get more insight into their own value chain,” he said. “They have a small number of often small suppliers they are dependent on, and you cannot easily keep track of what they’re doing.” With Signal AI, he explained, Deloitte can follow what is happening with those companies to identify if there are any risks: if they are no longer able to deliver, for example, if there is a scandal that puts them out of business, or if the company is causing issues related to sustainability.
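A supplier watchlist built on media-derived scores can be expressed very simply. The sketch below is illustrative only; the scores and thresholds are invented stand-ins for a feed like Signal AI's.

    # Minimal supplier-risk flagging; scores and thresholds are invented.
    suppliers = {
        "acme-recycling": {"esg_risk": 0.82, "mentions": 14},
        "blue-harbor": {"esg_risk": 0.21, "mentions": 3},
    }

    def flagged(name: str, risk_threshold: float = 0.7, min_mentions: int = 5) -> bool:
        data = suppliers[name]
        return data["esg_risk"] >= risk_threshold and data["mentions"] >= min_mentions

    watchlist = [name for name in suppliers if flagged(name)]
    print(watchlist)  # ['acme-recycling']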

In one case, Deloitte discovered a company that was not treating their workers fairly. “You can definitely fight greenwashing because you can see what is going on,” he said. “You can leverage millions of sources to identify what is really happening.” 

ESG will need AI and humans going forward

As sustainability and other ESG-related regulations begin to proliferate around the world, AI and smart technology will continue to play a crucial role, said Deloitte’s de Spa. “It’s not just about carbon, or even having a responsible value chain that has a net zero footprint,” he said. “But it’s also about modern slavery and farmers and other social types of things that companies will need to report on in the next few years.” 

Going forward, a key factor will be how to connect and integrate data together using AI, said IBM’s Dobrin. “Many offer a carbon piece or sell AI just for energy efficiency or supply chain transparency,” he said. “But you need to connect all of it together in a one-stop-shop, that will be a total game-changer in this space.” 

No matter what, said Rucker, there is certainly going to be more for AI-driven tools to measure when it comes to ESG. “One of the reasons I get excited about this is because it’s not just about a carbon footprint anymore, and those massive amounts of data mean you’re going to have to have heavy lifting done by a machine,” she said. “I see an ESG future where the human needs the machine and the machine needs the human. I don’t think that they can exist without each other.” 


5 Companies That Came To Win This Week


Rick Whiting

For the week ending July 8 CRN takes a look at the companies that brought their ‘A’ game to the channel.


The Week Ending July 8

Topping this week’s Came to Win list is IBM for a strategic acquisition in the data observability space.

Also making this week’s list are N-able for its own savvy acquisition in the Microsoft cloud arena, Broadcom for passing a milestone in its bid to acquire VMware, cybersecurity startup Swimlane for an impressive funding round, and Intel for proceeding with its plan to build an advanced chip manufacturing facility in Ohio.

IBM Goes Big On Data Observability With Databand Acquisition

IBM has made a move to expand its “DataOps” data management and governance portfolio with its acquisition of data observability technology provider Databand.

Israel-based Databand develops a “proactive” data observability platform for monitoring the health and quality of data used for operational and analytical tasks. AI and machine learning systems also rely on high-quality data to function effectively.

IBM expects to use Databand to strengthen its overall data management, AI and automation software offerings. Data observability software like Databand essentially adapts DevOps principles for DataOps purposes. Businesses and organizations use Databand to identify, troubleshoot and resolve data issues in near real time.
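In practice, checks of this kind boil down to computing per-batch health metrics (completeness, volume, freshness) and alerting on thresholds. Below is a minimal sketch with an invented table and thresholds; it is not Databand's API.

    # Minimal data-quality check in the spirit of data observability.
    # The batch and thresholds are invented for illustration.
    import pandas as pd

    batch = pd.DataFrame({
        "order_id": [1, 2, 3, 4],
        "amount": [25.0, None, 40.0, 12.5],
    })

    null_rate = batch["amount"].isna().mean()
    row_count = len(batch)

    alerts = []
    if null_rate > 0.10:
        alerts.append(f"amount null rate {null_rate:.0%} exceeds 10%")
    if row_count < 100:
        alerts.append(f"row count {row_count} below expected minimum")
    print(alerts)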

IBM disclosed the acquisition this week, although it actually closed the deal on June 27.

N-able To Drive Microsoft 365, Azure Sales With Spinpanel Acquisition

Sticking with the theme of strategic acquisitions, N-able is taking its Microsoft cloud business to the next level for its community of more than 25,000 MSPs by acquiring Spinpanel, which specializes in managing, automating and selling Microsoft 365 and Azure solutions.

Microsoft partners use Spinpanel tools to reduce complexity, optimize the use and value of their Microsoft cloud products, and profitably scale their Microsoft business.

Spinpanel, based in the Netherlands, operates a multitenant Microsoft 365 management and automation platform that Microsoft cloud service providers use to automate provisioning, security and management for Microsoft tenants, users and licenses in a single, consolidated hub.

Broadcom Gets One Step Closer To VMware Acquisition

A 40-day “go-shop” period in Broadcom’s agreement to buy VMware ended this week with no new offers, eliminating one possible hurdle to the $61 billion acquisition.

During the negotiated go-shop period VMware could entertain offers from other potential suitors. That could have set off a bidding war for virtualization tech giant VMware and, at the very least, delayed Broadcom’s timetable to complete the acquisition by the end of its fiscal 2023.

No other offers have emerged. VMware’s board has approved the deal, as has Michael Dell, VMware’s chairman and largest shareholder. But the acquisition still faces hurdles including winning approval from U.S. and European regulators.

Cybersecurity Startup Swimlane Raises $70M For Global Expansion

Security automation startup Swimlane raised an impressive $70 million in Series C funding this week as the company prepares for a major expansion of its channel business.

The company will use the cash infusion for a number of purposes, including expanding its marketing and partnership programs on a global scale.

Around 70 percent to 75 percent of Swimlane’s business in North America is through the channel. But the company will rely heavily on channel partners as it expands into Europe and Asia.

Intel Takes Next Steps In Building Ohio Semiconductor Fab

Intel has acquired the land near Columbus, Ohio, where it plans to build advanced semiconductor manufacturing facilities, the chipmaker confirmed this week.

The company has begun preparing the approximately 750-acre site where initial plans call for building two semiconductor manufacturing plants.

Word of the development activity follows recent reports that Intel was delaying the project because the U.S. Congress hasn’t appropriated funding for the U.S. semiconductor industry under the CHIPS For America Act. That legislation provides tax credits for American microchip manufacturers and allocates $50 billion for chip fabrication incentives. Some of that appropriation is expected to help fund the construction of the Ohio facility.

An Intel spokesperson did say that the company is postponing an official ceremony at the site while it awaits the funding but is proceeding with the site preparation work. The new world-class manufacturing facility is expected to reduce Intel’s reliance on third-party fabs for manufacturing.

Rick Whiting

Rick Whiting has been with CRN since 2006 and is currently a feature/special projects editor. Whiting manages a number of CRN’s signature annual editorial projects including Channel Chiefs, Partner Program Guide, Big Data 100, Emerging Vendors, Tech Innovators and Products of the Year. He also covers the Big Data beat for CRN. He can be reached at rwhiting@thechannelcompany.com.
