Making the DevOps Pipeline Transparent and Governable



Shane Hastie: Good day folks. This is Shane Hastie for the InfoQ Engineering Culture podcast. Today, I'm sitting down with David Williams from Quali. David, welcome. Thanks for taking the time to talk to us today.

David Williams: Thanks, Shane. It's great to be here.

Shane Hastie: Probably my first starting point for most of these conversations is who's David?

Introductions [00:23]

David Williams: David, he's a pretty boring character, really. He's been in the IT industry all his life, so there are only so many parties you can go and entertain people with on that subject now. I've been working since I first left school. My first jobs were in IT operations in a number of financial companies. I started at the back end. For those of you who want to know how old I am, I remember a time when printing was a thing, and decollating was my job: carrying tapes, separating printout, doing those sorts of things. So I really got a very grassroots-level understanding of what technology was all about, and it was nowhere near as glamorous as I'd been led to believe. So I started off, I'd say, working in operations, and worked my way through computer operations, systems administration, network operations. I used to be part of a NOC team, customer support.

David Williams: I did that sort of path, from as low as you can get on the ladder to, arguably, about a rung above. And then what happened over that period of time was I worked a lot with distributed systems, lights-out computing scenarios, et cetera, and it enabled me to get more involved in some of the development work that was being done, specifically to manage these new environments: mesh computing, clusters, et cetera. How do you move workloads around dynamically, and how does the operating system become much more aware of what it's doing and why? Because obviously, it just sees them as workloads, but it needed to be smarter. So I got into development that way, really. I worked for Digital Equipment in its heyday, working on clusters as part of the team that was doing the operating system work. And that, combined with my knowledge of how people were using the tech, being someone who was once an operations person, enabled me as a developer to have a little bit of a different view on what needed to be done.

And that's what really motivated me to excel in that area, because I wanted to make sure that a lot of the things being built made operations simpler, made what was going on more accountable to the business, and made the services a little more transparent in how IT was delivering them. Luckily for me, the tech industry reinvents itself in a very similar way every seven years, so I just have to wait seven years to look like one of the smart guys again. So that's how I really got into it from the get-go.

Shane Hastie: So that developer experience is what today we'd call thinking about making things better for developers. What are the key elements of the developer experience for us today?

The complexity in the developer role today [02:54]

David Williams: When I was in development, the main criterion that I was really responsible for was time. It was around time and production rates. I really had no clue why I was developing the software. Obviously, I knew what application I was working on and I knew what it was, but I never really saw the results. I wasn't doing it for a great amount of time, to be honest with you, because when I started looking at what needed to be done, I moved quite quickly from being a developer into being a product manager, which, by the way, is not exactly a smooth path. But I think it was something that enabled me to be a better product manager at the time, because I understood the operations aspects, I had been a developer, and I understood what it was that made the developer tick, because that's why I did it.

It was a great job to create something, work on it, and actually show the results. And I think over the years, it enabled me to look at the product differently. What developers do today is radically more advanced than what I was expected to do. I did not have continuous delivery. I did not really have continuous feedback. I did not have the responsibility for testing whilst developing; there was no combined thing. It was very segmented and siloed. And I think over the years, I've seen what I used to do as an art form become extremely sophisticated, with a lot more required of it than there was. In my career, I was a VP of Products at IBM Tivoli, I was a CTO at BMC Software, and I worked for CA Technologies prior to its acquisition by Broadcom, where I was the Senior Vice President of Product Strategy.

But in all those jobs, it enabled me to really understand the value of the development practices and how those practices can be honed in support of both the products and the IT operations world, as well as, more than anything else, the connection between the developer and the consumer. That was never part of my role. I had no clue who was using my product. And as an operations person, I only knew the people that were unhappy. So today's developers tend to be highly skilled in a way that I was not, because coding is only part of their role. Communication, collaboration, integration, the cloud computing aspects: everything that you now have to include from an infrastructure standpoint is of significantly greater complexity. And I'll summarize by saying that I was also an analyst at Gartner for many years, where I covered DevOps toolchains.

And the one thing I found out there was there isn't a thing called DevOps that you can put into a box. It's very much based upon a culture and a type of company that you're with. So everybody had their interpretation of their box. But one thing was very common, the complexity in all cases was significantly high and growing to the point where the way that you provision and deliver the infrastructure in support of the code you're building, became much more of a frontline job than something that you could accept as being a piece of your role. It became a big part of your role. And that's what really drove me towards joining Quali, because this company is dealing with something that I found as being an inhibitor to my productivity, both as a developer, but also when I was also looking up at the products, I found that trying to work out what the infrastructure was doing in support of what the code was doing was a real nightmare.

Shane Hastie: Let's explore that when it comes, step back a little bit, you made the point about DevOps as a culture. What are the key cultural elements that need to be in place for DevOps to be effective in an organization?

The elements of DevOps culture [06:28]

David Williams: Yeah, this is a good one. When DevOps was in its infancy, it really was an approach that was radically different from the norm. For people that remember it back then, it had nothing to do with Agile. It was really about continuous delivery of software into the environment in small chunks, with microservices coming up. It was delivering very specific pieces of code into the infrastructure continuously, evaluating the impact of that release, and then making adjustments and changes in response to the feedback that gave you. So the fail-forward thing was very much an accepted behavior. What it glossed over a bit at the time was that it removed a lot of the compliance and regulatory, mandatory controls that people would use in the more traditional ways of developing and delivering code. But it was a fledgling practice.

And from that base form, it became a much, much bigger one. So really what that culturally meant was that initially it was many, many small teams working in combination toward a bigger outcome, whether it was stories in support of epics or whatever the structure was. But I find today it has a much bigger play, because now it does have Agile as an inherent construct within the DevOps procedures. So you've got the ability to do teamwork and collaboration and all the things that Agile defines, but you've also got the continuous delivery part added on top, which means that at any moment in time you're continually putting out updates and changes and then measuring the impact. And I think today's challenge is really that the feedback loop isn't as clear as it used to be, because people are now using it for serious application delivery.

The consumer used to be the primary recipient; the LAMP stacks that used to be built out there have now moved into back-end types of tech. And at that point, it gets very complex. So I think that the complexity of the pipeline is something that the DevOps team needs to work on. Collaboration and people working closely together is a no-brainer no matter what you're doing, to be honest. But the ability to have a focused understanding of the outcome objective, no matter who you are in the DevOps pipeline, so that you understand what you're doing and why, and everybody in the team understands their contribution, irrespective of whether they talk to each other, I think is really important. Which means that the technology supporting that needs to have context.

I need to understand what the people around me have done to the code. I need to know what stage it's in. I need to understand where it came from and who I pass it to. So all of that needs to be not just a cultural thing; the technology itself also needs to adhere to that type of practice.

Shane Hastie: One of the challenges or one of the pushbacks we often hear about is the lack of governance or the lack of transparency for governance in the DevOps space. How do we overcome that?

Governance in DevOps [09:29]

David Williams: The whole approach of DevOps, initially, was to think about things in small increments, the bigger objective obviously being the clarity. But the increments were there to provide lots and lots of enhancements and advances. When you fragment in that way and give the developer the ability to make choices on how they both code and provision infrastructure, it can sometimes lead not necessarily to things being insecure or ungoverned, but to there being different security and different governance within a pipeline. So where the teams are working quite closely together, that may not automatically carry over if you've still got a separate testing team. Your testing may or may not be part of your development code, and if you move from one set of infrastructure that supports the code to another one, they might be using a completely different set of tooling.

They might have different ways with which to measure governance. They might have different guardrails, obviously. And everything needs to be accountable to change, because financial organizations, in fact most organizations today, have compliance regulations that say any change to any production or non-production environment requires accountability. And so if you're not reporting in a consistent way, it makes the job of understanding what's going on in support of compliance and governance really difficult. So it really requires governance to be a much more abstract but end-to-end thing, as opposed to each individual stage having its own practices. Governance today is starting to move to a point where one person needs to see the end-to-end pipeline and understand exactly what is going on. Who is doing what, where, and how? Who has permissions and access? What are the configurations that are changing?

Shane Hastie: Sounds easy, but I suspect there's a whole lot of... Again, coming back to the culture, we're constraining things that for a long time, we were deliberately releasing.

Providing freedom within governance constraints [11:27]

David Williams: This is a challenge. When I was a developer, and it's applicable today, hearing the word "abstract" put the fear of God into me, to be honest with you. I hated the word abstract. I didn't want anything that made my life worse. I mean, being accountable was fine. I remember hearing the word "frameworks" and balking at the idea of a technology that brought my whole coding environment into one specific view. Today, nothing's changed. A developer has got to be able to use the tools that they want to use, and I think the reason for that is that, with the amount of skills that people have, we're going to have to, as an industry, get used to the fact that people have different skills and different focuses and different preferences in technology.

And so to actually mandate a specific way of doing something, or implement a governance engine that inhibits my ability to innovate, is counterproductive. It needs to have that balance. You need to be able to have innovation, freedom of choice, and the ability to use the technology in the way that you need to build the code. But you also need to be able to provide accountability to the overall objective, so you need to have that end-to-end view of what you're doing. As part of a team, each team member should have responsibility for it, and you need to be able to provide the business with the things it needs to make sure that nothing goes awry and nothing has been breached: no security issues occurring, no configuration changes going untracked. So how do you do that?

Transparency through tooling [12:54]

David Williams: And as I said, that's what drove me towards Quali, because as a company, the philosophy was very much on the infrastructure. But when I spoke to the CEO of the company, we had a conversation prior to my employment here, based upon my prior employer, which was a company that was developing toolchain products to help developers and to help people release into production. And the biggest challenge that we had there was really understanding what the infrastructure was doing and the governance that was being put upon those pieces. So think about it as you being a train, but having no clue about what gauge the track is at any moment in time. And you had to put an awful lot of effort into working out what is being done underneath the hood. So what I'm saying is that there needed to be something that did that magic thing.

It enabled you with a freedom of choice, captured your freedom of choice, translated it into a way that adhered it to a set of common governance engines without inhibiting your ability to work, but also provided visibility to the business to do governance and cost control and things that you can do when you take disparate complexity, translate it and model it, and then actually provide that consistency view to the higher level organizations that enable you to prove that you are meeting all the compliance and governance rules.

Shane Hastie: Really important stuff there, but what are the challenges? How do we address this?

The challenges of complexity [14:21]

David Williams: The way to address it is to really understand why the problems are occurring. Because if you talk to a lot of developers today and ask, “How difficult is your life, and what are the issues?", the conversation you'll have with a developer is completely different from the conversation you'll have with a DevOps team lead or a business unit manager, in regard to how they see applications being delivered and coded. At the developer level, with the tools that are being developed today, the application dictates what it needs. It's no longer "I will build an infrastructure and then you will layer the applications on" like you used to be able to do. Now, the application and the way that it behaves actually define where you need to put the app and the tools that are used to both create it and manage it, from the Dev and the Ops side.

So really the understanding is: okay, that's the complexity. You've got the infrastructure providers, the clouds, and no matter what you say, they're all different. Serverless, in its classic adoption, is very proprietary in nature: you can't just move one serverless environment from one cloud to another. I'm sure there'll be a time when you might be able to do that, but today it's extremely proprietary. So you've got the infrastructure providers as the infrastructure technology layer. On top of that, you're going to have VMs or containers or serverless, something that sits on your cloud. And that again is defined by what the application needs, in respect to portability and where it lives, whether in the cloud or partly at the edge, wherever you want to put it.

And then of course on top of that, you've got all the things that enable you to instrument and code to those layers. You've got things like Helm charts for containers, and you've got Terraform for developing the infrastructure-as-code pieces, or you might be using Puppet or Chef or Ansible. So you've got lots of tools out there, including all the other tools from the service providers themselves. So you've got a lot of the instrumentation, and you've got that stack. On the skills side, you've got the application defining what you want to do, and the developer chooses how to use it in support of the application outcome. So really what you want is something that has a control-plane view that says: okay, you can do whatever you want.

Visibility into the pipeline [16:36]

David Williams: These are the skills that you need. But if people leave, what do you do? Do you go and get all the other developers to try to debug and translate what the coding did? Wouldn't it be cool instead to have a set of tech that could understand what the different platform configuration tools did and how they were applied, and look at it in a much more consistent form? It doesn't stop them using what they want, but the layer basically says, "I've discovered what you're using. I've translated how it's used, and I'm now enabling you to model it in a way that enables everybody to use it." So the skills issue is always going to exist. The turnover of people is, I would say, even more damaging than the skills issue, because people come and go quite freely today. It's the way that the market is.

And then there's the accountability. What do the tools do, and why do they do it? So you really want to deal with the governance piece that we mentioned earlier on, and you also want to provide context. The thing that's missing when you build infrastructure as code and do all these other things is that, even though you know why you're building it and what it does, that visibility is missing further up. When the DevOps lead and the business unit manager look at it, wouldn't it be cool if they could actually work out that what you did is in support of what they need? So it has the application ownership pieces, for example a business owner. These are the things that provide context. So as each piece of infrastructure is developed through the toolchain, it adds context, and the context is consistent.

So as the environments are moved in a consistent way, you actually have context that says: this was planned, this was developed, and this is what it was done for. This is how it was tested. I'm now going to leverage everything that the developer did, but add my testing tools on top, and I'm going to move that along with the context. I'm then going to release it, into either further testing or production. The point is that as things get provisioned, whether you are using different tools at different stages, or different platforms with which to develop and then test and then release, you should have some view that says all these things are the same thing in support of the business outcome, and that is all to do with context. Again, why I joined Quali was because it provides models that provide that context, and I think context is very important and not always mentioned.

As a coder, I used to write lots and lots of things in the code that gave people a clue about what I was doing. I used to have revision numbers. But outside of that, and what I did to modify the code within a set of files, I really didn't have anything about the business it was supporting. And I think today, with the fragmentation that exists, you've got to give people clues about why infrastructure is being deployed, used, and retired, and it needs to be done across the life cycle, because you don't want dormant infrastructure sitting out there. So you've got to have it accountable, and that's where the governance comes in. The one thing I didn't mention earlier on was that you've got to be able to work out what you're using, why it's being used, and why it's out there absorbing capacity and compute, costing me money, when no one seems to be using it.
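The lifecycle accountability described above (knowing who owns an environment, why it exists, and flagging it when it sits dormant) can be sketched in a few lines. This is an illustrative model only; the field names and the 14-day idle policy are assumptions for the example, not any vendor's actual implementation:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Environment:
    name: str
    owner: str            # who provisioned it
    purpose: str          # business context: why it exists
    last_used: datetime   # last activity observed
    monthly_cost: float   # what it costs to keep around

def find_dormant(envs, now, max_idle_days=14):
    """Flag environments idle longer than the allowed policy window."""
    cutoff = now - timedelta(days=max_idle_days)
    return [e for e in envs if e.last_used < cutoff]

now = datetime(2022, 7, 1)
envs = [
    Environment("perf-test", "alice", "release 2.3 load test",
                datetime(2022, 6, 29), 420.0),
    Environment("old-demo", "bob", "customer demo (finished)",
                datetime(2022, 5, 1), 310.0),
]
dormant = find_dormant(envs, now)
for e in dormant:
    print(f"{e.name}: idle, owned by {e.owner}, ${e.monthly_cost}/month")
```

With the purpose and owner carried on every environment, the "why is this out there costing me money" question becomes a query rather than an investigation.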

Accountability and consistency without constraining creativity and innovation [19:39]

David Williams: So you want that accountability, with context in it, so that at least you have information you can relay back to the business to say, "This is what it cost to actually develop the full life cycle of our app, at that particular stage of the development cycle." It sounds very complex because it is, but the way to simplify it is really not to abstract it, but to consume it. You discover it, you work out what's going on, and you create a layer of technology that can actually provide consistent costing through consistent tagging, consistent governance so that you're measuring things in the same way, and consistency through the applications layer. You're saying all these things happen in support of these applications, et cetera. So if issues or bugs occur, and that layer is integrated with the service management tools, what you have is a problem reported against a release specific to an application, which then associates itself with a service level, which enables you to do reporting and remediation that much more efficiently.

So that's where I think we're really going: the skills are always going to be fragmented, and you shouldn't inhibit people doing what they need. And the last thing I'll mention is that you should have the infrastructure delivered in the way you want it. So you've got CLIs, if that's your preferred way, or APIs to call if you want to. But it's not a developer-only world. If I'm more of an operations person, or someone that doesn't have deep coding skills, I should be able to see a catalog of available environments built by the people that actually have that skill. And I should be able to, in a single click, provision an environment in support of an application requirement, without being a coder.

So that means that you can actually share things. Coders can code, and that captures the environment. If that environment is needed by someone who doesn't have the skills, then because it has all that information in it, I can hit a click, it goes and provisions that infrastructure, and I haven't touched code at all. That's how you see the skills being leveraged. And you've just got to accept the fact that people will be transient going forward. They will work from company to company, project to project, and skills will be diverse, but you've got to provide a layer with which that doesn't matter.
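That sharing model can be sketched as a self-service catalog: coders publish environment blueprints, and anyone can provision one by name without touching the underlying code. All names and fields here are hypothetical illustrations, not any product's API:

```python
# Hypothetical self-service environment catalog: coders register
# blueprints; anyone can provision one by name in a "single click".
catalog = {}

def publish(name, blueprint):
    """Called by the coder who knows the infrastructure details."""
    catalog[name] = blueprint

def provision(name, requested_by):
    """Called by anyone; no coding knowledge required."""
    blueprint = catalog[name]
    # A real tool would drive Terraform/Helm/etc. here; this sketch
    # just returns a record of what would be created, with context.
    return {
        "environment": name,
        "steps": blueprint["steps"],
        "owner": requested_by,
        "purpose": blueprint["purpose"],
    }

publish("staging-web", {
    "purpose": "pre-release testing of the web app",
    "steps": ["create VPC", "deploy containers", "seed test data"],
})

env = provision("staging-web", requested_by="ops-team")
print(env["steps"])
```

Note that the provisioned record carries both the owner and the business purpose, so the governance and cost questions raised earlier stay answerable regardless of who clicked the button.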

Shane Hastie: Thank you very much. If people want to continue the conversation, where do they find you?

David Williams: They can find me in a number of places. I think the best place is at Quali: it's David.W@Quali.com. I'm the only David W., which is a good thing, so you'll find me very easily. Unlike on a plane I took the other day, where I was the third David Williams on board and the only one not to get an upgrade. I'm also on LinkedIn; Dave Williams can be found there under Quali and all the companies that I've spoken to you about. So as I say, I'm pretty easy to find. And I would encourage anybody to reach out to me if they have any questions about what I've said. It'd be a great conversation.

Shane Hastie: Thanks, David. We really appreciate it.

David Williams: Thank you, Shane.



Sun, 10 Jul 2022 13:55:00 -0500, https://www.infoq.com/podcasts/making-devops-pipeline-transparent/
IBM showcases its Smarter Computing solutions in SL

Sri Lanka's IBM unit recently held a local showcase event for its Smarter Computing solutions platform, which was initially introduced globally in 2011. The event focused on the company's workload-optimisation hardware range, including Storwize V7000 storage, BladeCenter servers and the 'Starter Kit for Cloud', by way of presentations and product demonstrations.

In addition, the IBM Smarter Computing Workload Simulator was also on hand to test, virtually, online, "different hypotheses and evaluate areas of potential savings and efficiency through the lens of IBM Smarter Computing systems and technologies."

According to a company statement: "Some of the presentations, for example, showed how IBM storage solutions can help customers free up shrinking IT budgets through automation, true storage virtualisation and true cloud-based storage, to enable them to spend more on innovation."

IBM's statement also indicated what its Smarter Computing solutions platform was attempting to accomplish, stating: "This strategy centers around three fundamental aspects - leveraging analytics to exploit vast amounts of data for business goals, utilising optimised systems that are designed for specific tasks; and managing as much of the IT as possible with cloud-computing technologies."

Sat, 25 Feb 2012 13:12:00 -0600 text/html https://www.sundaytimes.lk/120226/BusinessTimes/bt34.html
Delivery of Global Cancer Care: An International Study of Medical Oncology Workload

Cancer is now the second leading cause of death worldwide. There is a disproportionately high burden in low- and low-middle–income countries (LMICs), where the mortality-to-incidence ratio is double that of high-income countries.1-3 Although this is driven by a number of complex factors (including more advanced stage of disease at presentation), access to oncologists and the necessary infrastructure to deliver treatment are likely contributing factors. Cancer control efforts in LMICs are further challenged by the existing paradox in cancer funding: despite accounting for 62% of global cancer mortality, LMICs receive only 5% of global cancer funding.4 It is therefore unlikely that mortality and incidence trends in LMICs will improve without a shift in global cancer policy.

Oncology workload metrics for LMICs are scarce. Limited data from high-income countries (HICs) have described clinical workload and proposed targets.5-7 However, this has not been done on a global scale and does not include LMICs. To develop an effective global cancer policy and bridge gaps in the delivery of cancer care, an understanding of global oncology workload is crucial. To address this gap in knowledge, we undertook a global study to describe the (1) clinical workload of medical oncologists, (2) available infrastructure and supports, and (3) identified barriers to patient care. Data from this study will inform cancer policy and human resource planning in emerging and established cancer systems.

Study Population

The study population included any practicing physician who delivers chemotherapy; trainees were not eligible. The Web-based survey was distributed using a modified snowball methodology. As a means of identifying potential participants, the senior investigator (C.M.B.) contacted one oncologist in 54 countries and two regions (Caribbean and Africa) to invite study participation. Contact was preferentially directed to established national associations of medical oncologists. If this was not possible, C.M.B. approached one personal contact per country to invite participation and distribute the survey via an informal national network; this contact remained the sole source of survey distribution in the country. This study was approved by the Research Ethics Board of Queen’s University.

Survey Design and Distribution

An online electronic survey questionnaire was developed via Fluid Surveys to capture the following information: participant demographics, clinical practice setting, clinical workload, and barriers to patient care. The survey was designed with multidisciplinary input of the study investigators who practice in diverse environments from LMICs, upper-middle–income countries (UMICs), and HICs. The survey was then piloted and subsequently revised based on feedback from 10 additional oncologists from diverse global backgrounds. The final survey included 51 questions and took 10 to 15 minutes to complete; the instrument is shown in the Data Supplement.

Distribution of this survey used two primary methods. The senior investigator (C.M.B.) contacted individuals and regional oncology associations to create a broad distribution network. Whether the regional contact was an association or an individual, they were provided with an electronic link to the survey to distribute to their regional membership/network. These links were unique to each nation, but not individualized. The distributing partners were asked to provide the team with the number of survey recipients to ascertain the national response rate for the survey. The survey was distributed in November 2016. A reminder e-mail was sent via all national/regional contacts in January 2017.

Statistical Analysis

Countries were classified into LMICs, UMICs, and HICs on the basis of World Bank criteria.8 The primary objective was to describe oncologist workload across LMICs, UMICs, and HICs; oncologist workload was defined as the annual number of consultations for new patients with cancer seen per oncologist. Because of a relatively small number of responses from low-middle–income African nations, we combined these responses into a region called LMIC Africa. All data were initially collected in Fluid Surveys and subsequently exported to IBM Statistical Package for the Social Sciences (SPSS) for Windows version 24.0 (SPSS, Armonk, NY). Pearson χ2 tests were used to test for the difference in proportions, and the Kruskal-Wallis test was used to compare ordinal and continuous data by income stratification. Data consisted of categorical, ordinal, and continuous formats, occasionally collected as ranges (eg, < 50, 51 to 100, 101 to 150, etc). In the latter case, medians were generated using the midpoint of the categorical range (eg, a median value of 101 to 150 would be reported as 125). Data were analyzed using IBM SPSS.
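The midpoint convention described above can be illustrated with a short sketch (the response values below are made up for illustration, not study data; the mapping follows the paper's example of reporting the 101-to-150 category as 125):

```python
from statistics import median

# Illustrative mapping from survey range categories to midpoints,
# following the paper's convention (101 to 150 reported as 125).
MIDPOINTS = {"<50": 25, "51-100": 75, "101-150": 125, "151-200": 175}

# Hypothetical categorical responses from five participants.
responses = ["51-100", "101-150", "101-150", "151-200", "<50"]
values = [MIDPOINTS[r] for r in responses]
print(median(values))  # median of [75, 125, 125, 175, 25] -> 125
```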

Survey Distribution and Response

Fifty-four countries and two regional networks (Africa and Caribbean) were invited to participate in this study; 42 countries/regional networks (75%) agreed to participate. Among participating countries, the survey was distributed via national medical oncology organizations in 62% of cases (26 of 42) and via an informal network of contacts in 38% of cases (16 of 42). Overall, 1,115 respondents from 65 different countries participated in this study. Survey response rates were available for 40% (17 of 42) of all countries/regional networks and ranged from 3% in Singapore and Portugal to 76% in Slovenia (Data Supplement). Among study participants, 70% (782 of 1,115), 17% (186 of 1,115), and 13% (147 of 1,115) were from HICs, UMICs, and LMICs, respectively. The mean response rate across all countries was 12% (461 of 3,967); it was 12% (30 of 255), 13% (30 of 235), and 12% (401 of 3,477) for LMIC, UMIC, and HIC countries, respectively (P = .85).

Characteristics of Study Participants

The median age of respondents was 44 years; 58% (647 of 1,110) were male (Table 1). The proportion of female respondents was higher in HICs (44%; 341 of 777) and UMICs (47%; 87 of 186) compared with LMICs (24%; 35 of 147; P < .001). Eighty-one percent (898 of 1,115) of all respondents were medical oncologists; the median number of years in practice was 10, with a median of 6 years of postgraduate training. Participants from LMICs were more likely to be clinical oncologists (ie, delivering chemotherapy and radiation; 20%; 29 of 147) than were those from UMICs (9%; 16 of 186) and HICs (9%; 67 of 782; P < .001). Participants in LMICs were less likely to have completed training in their current country of practice (82%; 120 of 147) compared with UMICs (91%; 170 of 186) and HICs (90%; 707 of 782; P = .004).


Table 1 Demographic and Clinical Practice Setting of Respondents to Global Medical Oncology Workload Survey Stratified by World Bank Economic Classification

Clinical Practice Setting

The proportion of respondents working exclusively in the public setting varied substantially: 29% (42 of 146) in LMICs, 38% (71 of 186) in UMICs, and 79% (620 of 782) in HICs (P < .001). Physicians in LMICs were more likely to work in a designated cancer hospital (48%; 70 of 147) compared with UMICs (36%; 66 of 186) and HICs (31%; 243 of 782; P < .001). Respondents from LMICs (39%; 58 of 147) were more likely to work within a smaller group (more than five) of chemotherapy providers compared with UMICs (26%; 48 of 186) and HICs (10%; 76 of 782; P < .001). On site radiation, palliative care, and chemotherapy pharmacists were less likely to be available at LMIC centers (80% [117 of 147], 71% [104 of 147], 63% [93 of 147] availability, respectively) compared with HICs (86% [669 of 782], 89% [693 of 782], 89% [697 of 782] availability, respectively; all P < .001). Electronic medical records were available less commonly in LMICs (50% [73 of 147] v 89% [691 of 782]; P < .001), and corresponding rates of handwritten clinic notes were much higher in LMICs compared with UMICs and HICs (82% [120 of 147] for LMICs v 46% [85 of 186] for UMICs and 25% [192 of 782] for HICs; P < .001).

Delivery of Clinical Care

LMIC respondents worked a median of 6 days per week, whereas both UMIC and HIC respondents reported working a median of 5 days per week (P < .001); 71% (104 of 147) of LMIC physicians worked 6 to 7 days per week compared with 21% (166 of 782) of HIC physicians. Median hours worked per week were 41 to 50 across all groups. LMIC and UMIC respondents reported a median of 4 and 3 weeks of paid vacation per year, respectively, compared with 5 weeks for HIC respondents (P < .001); 20% (29 of 147) of LMIC and 3% (23 of 782) of HIC physicians had no paid vacation. The median number of weeks of paid conference leave and the proportion of physicians with no paid conference leave for LMICs, UMICs, and HICs were 2 weeks (29%; 43 of 147), 1.5 weeks (20%; 37 of 186), and 2 weeks (10%; 77 of 782), respectively (P < .001). Although there was no substantial difference in the proportion of respondents who had on-call duties (68% [100 of 147], 63% [117 of 186], and 72% [565 of 782] for LMICs, UMICs, and HICs, respectively), oncologists who took call in LMICs were more likely than UMIC or HIC physicians to be on call every night except when on vacation (60% [59 of 99] v 41% [48 of 116] and 17% [96 of 560]; P < .001). The mean percentages of time that study respondents spent on clinical, research, teaching, and administrative duties were consistent across the three groups (Table 2).


Table 2 Delivery of Clinical Care Reported by Respondents to Global Medical Oncology Survey Stratified by World Bank Economic Classification

Clinical Volumes

The median number of new consults per year among all respondents was 175; 13% (140 of 1,103) saw > 500 and 6% (69 of 1,103) saw > 1,000 new consults per year. Respondents from LMICs reported seeing significantly more consults (median, 425/y) than UMIC and HIC respondents (median, 175/y; P < .001). The proportion of oncologists in LMICs seeing > 500 (39%; 58 of 147) and > 1,000 (22%; 33 of 147) new consults was substantially higher than in UMICs (14%, 25 of 182; and 6%, 11 of 182, respectively) and HICs (7%, 57 of 774; and 3%, 25 of 774, respectively; P < .001). Distribution of clinical workload across economic groups and among the top 10 countries is shown in Figure 1. The 10 highest-volume countries were Pakistan (975; 73% > 500 new consults), India (475; 43% > 500), Turkey (475; 27% > 500), LMIC Africa (375; 37% > 500), Italy (325; 32% > 500), China (275; 22% > 500), Hungary (225; 29% > 500), Slovenia (225; 12% > 500), Chile (225; 9% > 500), and Mexico (200; 21% > 500).

The number of patients seen in a full day of clinic varied across economic groups (LMIC, 25; UMIC, 25; HIC, 15; P < .001); 20% (30 of 147) of LMIC oncologists saw > 50 patients per day compared with 2% (12 of 774) in HICs (P < .001). Oncologists in LMICs were considerably more likely to treat all tumor types compared with those in UMICs and HICs (68% v 49% v 14%; P < .001). LMIC respondents reported less time per patient interaction (25 minutes per new consult) compared with UMIC and HIC respondents (35 minutes; P < .001). Wait time for new consults to be seen (measured from time of referral) was significantly shorter in LMICs (median wait, 0 days) compared with UMICs and HICs (4 to 7 days for each; P < .001); 56% (83 of 147) of LMIC oncologists reported seeing patients on the same day of referral/presentation. Participation in multidisciplinary case conferences varied across economic groups; 54% (80 of 147) of LMIC and 50% (93 of 186) of UMIC oncologists attended at least one multidisciplinary case conference per week compared with 80% (627 of 782) of HIC oncologists (P < .001; Table 3).


Table 3 Patient Care Case Volumes Reported by Respondents to a Global Survey of Medical Oncologists Stratified by World Bank Economic Classification

Satisfaction, Barriers, and Challenges

Self-reported job satisfaction (on a Likert scale; 1 = not satisfied, 10 = highly satisfied) did not vary across economic groups (median score, 8 in all groups). Despite lower clinical volumes, physicians in HICs (68%; 529 of 780) and UMICs (75%; 139 of 186) were more likely than oncologists in LMICs (52%; 76 of 147) to report high patient volumes as adversely affecting job satisfaction (P < .001). The most commonly reported barriers to clinical care in LMICs were patients not being able to pay for treatment and limited availability of new cancer therapies. The most common barriers reported in HICs were high clinical volumes and insufficient time to keep up with published literature (Table 4).


Table 4 Top Five Reported Barriers to Patient Care as Reported by Respondents to a Global Medical Oncology Workload Survey

Discussion

This study offers insights into the clinical practice setting and workload of medical oncologists working in different contexts and resource settings. Several important findings emerge. First, there is a substantial difference in clinical workload across economic settings; oncologists in LMICs see significantly more patients, work more days, are more often on call, and have less vacation time than their global counterparts. Second, oncologists in LMICs are less likely to work in the public system and have less access to parallel cancer services, such as radiotherapy, palliative care, and multidisciplinary team meetings, than oncologists in UMICs and HICs. Third, the higher clinical volumes in LMICs are associated with less time spent with patients. Finally, we observed a disconnect between clinical volume and the reported barriers to patient care. Despite substantially lower patient volumes, oncologists in HICs and UMICs identify high clinical workload as a top barrier to patient care; the top barriers identified by oncologists in LMICs relate to patients being unable to pay for care and limited access to cancer therapies.

Our study results should be considered in light of existing literature on this topic. Our data confirm anecdotal reports that specialist case volumes in LMICs are substantially higher than in UMICs and HICs.9 Two recent studies have reported oncology workloads in HICs. In 2012, Blinman et al7 described a survey of 96 Australian medical oncologists reporting a mean of 270 new patient consults per year. A 2013 survey of 33 New Zealand medical oncologists reported 220 new patient consults per year.6 These figures are slightly higher than our own median value of 175 consults per year in HICs (and 175 among Australian respondents, specifically).

The Systemic Therapies Task Force established by Cancer Care Ontario in 2000 determined that 160 to 175 new consults was the optimal annual target for medical oncologists.5 This number was derived by calculating the annual number of hours per oncologist available for direct patient care and then dividing it by a tumor-specific patient care time to estimate the number of annual new patient consults. The tumor-specific patient care time comprised the total number of hours that an oncologist should expect to dedicate to an average new patient of each tumor type over a 5-year period.5 Although the LMIC data in our study (425 consults per year) were substantially above this target, the self-reported workload of UMIC and HIC respondents fell within this recommended range.
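
As a back-of-envelope illustration of the derivation described above (both inputs here are hypothetical placeholders, not the task force's actual figures):

```python
# Illustrative sketch of a Cancer Care Ontario-style consult-target calculation.
# Both input values below are assumptions chosen for demonstration only.

# Assumed annual hours per oncologist available for direct patient care.
direct_care_hours_per_year = 1600

# Assumed average hours of care one new patient requires over 5 years,
# averaged across tumor types.
hours_per_new_patient = 9.5

# Dividing available hours by per-patient time yields an annual consult target.
target_new_consults = direct_care_hours_per_year / hours_per_new_patient
print(round(target_new_consults))  # 168, inside the 160 to 175 range cited above
```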

Despite seeing much higher volumes than their UMIC and HIC counterparts, a smaller proportion of our LMIC respondents listed high clinical volumes as a barrier to care. This highlights the fact that although LMIC nations likely have a shortage of oncologists, the delivery of cancer care in low-resource settings presents multifactorial challenges, with fundamental economic barriers being a more pressing issue than practitioner shortage. Accordingly, our data suggest that a standardized model of cancer care cannot be applied equally to LMIC, UMIC, and HIC countries and that an individualized approach is required.

Workload studies do exist in the field of radiation oncology. A recent European working group recommended a maximum of 250 new consults per year for radiation oncologists.10 Previous radiation oncology workload studies from Japan (n = 194 to 291 annual new consults), Australia (n = 250), and Thailand (n = 296) suggest slightly higher new consult loads compared with medical oncologists.11-13 However, direct comparisons between medical oncology and radiation oncology consult targets are of limited utility because the physician-level and system-level workloads are different in each setting.

Existing literature on oncologist burnout provides a basis for comparison with some of our data. Shanafelt et al14 examined burnout and job satisfaction in a 2014 survey of 11,117 American oncologists. Compared with this study, our participants were younger (45 v 52 years), more recently in practice (10 v 22 years), and worked a comparable number of hours per week (41 to 50 v 46).14 HIC respondents in our study reported less time with new patients (35 minutes v 52 minutes). The Cancer Care Ontario analysis reported a comparable number of hours worked (48 hours per week).5 Glasberg et al15 completed a study of burnout among 102 Brazilian oncologists in 2007; they reported comparable working hours (< 50) to our UMIC respondents (41 to 50). The consistency between workload metrics in the aforementioned studies from the United States, Canada, Australia, and Brazil and workload reported by our HIC and UMIC respondents offers face validity to the results of our global study.

Our study results should be considered in light of methodologic limitations. As with any survey, respondents may not be representative of all providers in each system. Our results are further limited by the fact that 16 of 42 countries did not have a national association and relied on informal survey distribution by one contact oncologist. We also were unable to identify the denominator (ie, response rate) for many countries (Data Supplement). It is, however, reassuring that the response rate was comparable across LMICs, UMICs, and HICs. Workload data are self-reported and therefore may not accurately reflect true clinical volumes. Our study has a limited number of respondents from very low-income countries. We are also missing data from the United States and Russia; two of the world's largest countries chose not to participate in this study. The LMIC group had the lowest number of respondents in our survey, indicating the difficulty of reaching this population of oncologists. Building on our results will require country-level analysis using more sophisticated sampling instruments to guide policy recommendations. Our results also provide comparative data that may be useful for individual health systems. Finally, delivery of systemic therapy is only one element of cancer care, and meaningful improvements in cancer care will require parallel initiatives in other allied clinical disciplines, such as radiation/surgical oncology, palliative care, pathology, radiology, nursing, and pharmacy.

Health care human resource (HHR) planning has been belatedly recognized as critical to achieving universal health coverage and the health targets of Sustainable Development Goals of the WHO. Most empirical work has been focused at the macro-level of HHR planning. There is uniform agreement that a demand-based shortage of 15 million or more health care workers will be the reality by 2030, with shortages being most acutely felt in middle-income countries, as well as East Asia and the Pacific.16 This crisis of human capital in health is one of availability (supply of qualified personnel), distribution (recruitment, retention where needed most), and performance (productivity and quality of care provided). There is, however, a dearth of cancer-specific HHR research. What has been done in surgery17 and radiotherapy18 has primarily focused on using worker-to-population ratios that ignore need, demands, and institutional frameworks. More focused HHR studies in cancer at the country level have also suffered from overmodeling and a lack of real-world data. However, even country-level data concur.19 The deficits among need, demand, and provision are wide and widening. This presents a fundamental challenge to the ability of global cancer to deliver its universal health coverage and Sustainable Development Goal commitments. The real-world data presented in our current work provide one aspect of a multimethodologic approach needed to study cancer HHR to inform policy. To drive changes in cancer HHR policy, a variety of supply-and-demand methods (needs-based, utilization or demands-based, workforce-to-population ratios, and target setting) will be required. Cancer care has one of the most complex HHR patterns in health care, and national-level studies are crucial to accurately inform long-term planning.20

In summary, we report substantial global variation in medical oncology case volumes and clinical workload; this is most striking among LMICs, where huge deficits exist. Additional work is needed, particularly detailed country-level mapping, to quantify activity-based global medical oncology practice and workload to inform training needs and the design of new pathways and models of care.

© 2017 by American Society of Clinical Oncology

C.M.B. is supported as the Canada Research Chair in Population Cancer Care. R.S. acknowledges the support of the National Cancer Institute Centre for Global Health. B.S. acknowledges the support of the Slovenian Research Agency.

The following represents disclosure information provided by authors of this manuscript. All relationships are considered compensated. Relationships are self-held unless noted. I = Immediate Family Member, Inst = My Institution. Relationships may not relate to the subject matter of this manuscript. For more information about ASCO's conflict of interest policy, please refer to www.asco.org/rwc or ascopubs.org/jco/site/ifc.

Adam Fundytus

No relationship to disclose

Richard Sullivan

Honoraria: Pfizer

Consulting or Advisory Role: Pfizer (Inst)

Verna Vanderpuye

No relationship to disclose

Bostjan Seruga

Honoraria: Astellas Pharma, Janssen Oncology, Novartis, Sanofi

Consulting or Advisory Role: Astellas Pharma, Sanofi, Janssen Oncology

Gilberto Lopes

Honoraria: AstraZeneca, Roche/Genentech, Merck Serono, Merck Sharp & Dohme, Fresenius Kabi, Novartis, Bristol-Myers Squibb, Janssen-Cilag, Boehringer Ingelheim, Pfizer, CIPLA, Sanofi, Eisai, Eli Lilly

Consulting or Advisory Role: Pfizer, Bristol-Myers Squibb, Eli Lilly/ImClone

Research Funding: Eli Lilly/ImClone, Pfizer, AstraZeneca, Merck Sharp & Dohme, Eisai, Bristol-Myers Squibb

Expert Testimony: Sanofi

Nazik Hammad

No relationship to disclose

Manju Sengar

No relationship to disclose

Wilma M. Hopman

No relationship to disclose

Michael D. Brundage

No relationship to disclose

Christopher M. Booth

No relationship to disclose

The authors gratefully acknowledge the following individuals who facilitated distribution of this global survey: Chris Karapetis, MD (Australia); Semir Beslija, MD (Bosnia); Bettina Muller, MD (Chile); Jaime Diaz, MD (Colombia); Denis Landaverde, MD (Costa Rica); Anneli Elme, MD (Estonia); Heikki Joensuu, MD (Finland); Christophe Letourneau, MD (France); Evangelia Razis, MD (Greece); Gyorgy Bodoky, MD (Hungary); Carmine Pinto, MD (Italy); Dingle Spence, MD (Jamaica); Hisato Kawakami, MD (Japan); Salem Al Shemmari, MD (Kuwait); Ahmad Radzi, MD (Malaysia); Samuel Rivera, MD (Mexico); Dean Harris, MD (New Zealand); Zeba Aziz, MD (Pakistan); Maria Bautista, MD (Philippines); Luis Da Costa, MD (Portugal); Alexandr Eniu, MD (Romania); Abdullah Altwairqi, MD (Saudi Arabia); Sinisa Radulovic, MD (Serbia); Ravi Kanesvaran, MD (Singapore); Alberto Ocana, MD (Spain); Mahilal Wijekoon, MD (Sri Lanka); Martin Erlanson, MD (Sweden); Arnoud Templeton, MD (Switzerland); Mehmet Artac, MD (Turkey); Mohammed Ali Jaloudi, MD (United Arab Emirates); Johnathan Joffe, MD (United Kingdom); Jeanette Dickson, MD (United Kingdom); and Tuan Anh Pham, MD (Vietnam).

1. GBD 2013 Mortality and Causes of Death Collaborators: Global, regional, and national age-sex specific all-cause and cause-specific mortality for 240 causes of death, 1990-2013: A systematic analysis for the Global Burden of Disease Study. Lancet 385:117-171, 2015
2. Fitzmaurice C, Dicker D, Pain A: The global burden of cancer 2013. JAMA Oncol 1:505-527, 2015
3. Goss PE, Strasser-Weippl K, Lee-Bychkovsky BL, et al: Challenges to effective cancer control in China, India, and Russia. Lancet Oncol 15:489-538, 2014
4. Ngoma T: World Health Organization cancer priorities in developing countries. Ann Oncol 17(suppl 8):viii9, 2006
5. Cancer Care Ontario Systemic Therapy Task Force: The Systemic Therapy Task Force report. https://www.cancercare.on.ca/common/pages/UserFile.aspx?fileId=14436
6. Bidwell S, Simpson A, Sullivan R, et al: A workforce survey of New Zealand medical oncologists. N Z Med J 126:45-53, 2013
7. Blinman PL, Grimison P, Barton MB, et al: The shortage of medical oncologists: The Australian Medical Oncologist Workforce Study. Med J Aust 196:58-61, 2012
8. The World Bank: World Bank country and lending groups. https://datahelpdesk.worldbank.org/knowledgebase/articles/906519-world-bank-country-and-lending-groups
9. Li Q, Xie P: Outpatient workload in China. Lancet 381:1983-1984, 2013
10. Budiharto T, Musat E, Poortmans P, et al: Profile of European radiotherapy departments contributing to the EORTC Radiation Oncology Group (ROG) in the 21st century. Radiother Oncol 88:403-410, 2008
11. Phungrassami T, Funsian A, Sriplung H: 30 years of radiotherapy service in Southern Thailand: Workload vs resources. Asian Pac J Cancer Prev 14:7743-7748, 2013
12. Teshima T, Numasaki H, Shibuya H, et al: Japanese structure survey of radiation oncology in 2007 based on institutional stratification of patterns of care study. Int J Radiat Oncol Biol Phys 78:1483-1493, 2010
13. Leung J, Vukolova N: Faculty of Radiation Oncology 2010 workforce survey. J Med Imaging Radiat Oncol 55:622-632, 2011
14. Shanafelt TD, Gradishar WJ, Kosty M, et al: Burnout and career satisfaction among US oncologists. J Clin Oncol 32:678-686, 2014
15. Glasberg J, Horiuti L, Novais MAB, et al: Prevalence of the burnout syndrome among Brazilian medical oncologists. Rev Assoc Med Bras (1992) 53:85-89, 2007
16. Liu JX, Goryakin Y, Maeda A, et al: Global health workforce labor market projections for 2030. Hum Resour Health 15:11, 2017
17. Sullivan R, Alatise OI, Anderson BO, et al: Global cancer surgery: Delivering safe, affordable, and timely cancer surgery. Lancet Oncol 16:1193-1224, 2015
18. Atun R, Jaffray DA, Barton MB, et al: Expanding global access to radiotherapy. Lancet Oncol 16:1153-1186, 2015
19. Daphtary M, Agrawal S, Vikram B: Human resources for cancer control in Uttar Pradesh, India: A case study for low and middle income countries. Front Oncol 4:237, 2014
20. Lopes MA, Almeida ÁS, Almada-Lobo B: Handling healthcare workforce planning with care: Where do we stand? Hum Resour Health 13:38, 2015
Source: https://ascopubs.org/doi/10.1200/JGO.17.00126
Consider the Promises and Challenges of Medical Image Analyses Using Machine Learning

Medical imaging saves millions of lives each year, helping doctors detect and diagnose a wide range of diseases, from cancer and appendicitis to stroke and heart disease. Because non-invasive early disease detection saves so many lives, scientific investment continues to increase. Artificial intelligence (AI) has the potential to revolutionize the medical imaging industry by sifting through mountains of scans quickly and offering providers and patients life-changing insights into a variety of diseases, injuries, and conditions that may be hard to detect without supplemental technology.

Images are the largest source of data in healthcare and, at the same time, one of the most challenging sources to analyze. Clinicians today must rely mainly on medical image analysis performed by overworked radiologists, and sometimes they analyze scans themselves. Interpretation of medical images is still performed largely by human experts, and such interpretation is inherently limited by subjectivity, the complexity of the image, the extensive variation that exists across different interpreters, and fatigue.

Despite constant advances in the medical imaging space, almost one in four patients experiences false positives on image readings. This can lead to unnecessary invasive procedures and follow-up scans that add cost and stress for patients. And while false negatives happen less often, the impact can be catastrophic. The surprisingly high rate of false positives is due in part to concerns among radiologists about missing a diagnosis. Late detection of disease significantly drives up treatment costs and reduces survival rates.

This is a situation set to change, though, as pioneers in medical technology apply AI to image analysis. The latest deep-learning algorithms are already enabling automated analysis that provides accurate results immeasurably faster than the manual process can achieve. As these automated systems become pervasive in the healthcare industry, they may bring about radical changes in the way radiologists, clinicians, and even patients use imaging technology to monitor treatment and improve outcomes.

AI applications for radiology use deep-learning algorithms and analytics to assess images for tumors or suspicious lesions systematically and to provide detailed reports on their findings instantly. These systems are trained on labeled data to identify anomalies. When a new image is submitted, the algorithm applies its training to differentiate normal vs. abnormal structures (e.g., benign/malignant). As these tools become more sensitive, they will also potentially enable earlier diagnosis of disease because they will be able to identify small variances in an image that are not easily spotted by the human eye. They can also be used to track treatment progress, recording changes in the size and density of tumors over time that can inform treatment, and to verify progress in clinical studies.
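
The train-then-apply loop described in this paragraph can be illustrated with a deliberately tiny model. Real systems train deep networks on pixel data; the two "lesion features" (size, density), their values, and the toy logistic model below are purely hypothetical:

```python
# Toy sketch of supervised training on labeled examples, then inference on a
# new case. All features, values, and thresholds here are invented.
import math

# Synthetic labeled examples: (size_mm, density) -> 0 = benign, 1 = malignant.
train = [((3.0, 0.20), 0), ((4.5, 0.30), 0), ((5.0, 0.25), 0),
         ((14.0, 0.80), 1), ((16.5, 0.90), 1), ((12.0, 0.70), 1)]

w = [0.0, 0.0]  # feature weights
b = 0.0         # bias

def predict(x):
    """Sigmoid probability that the lesion described by x is malignant."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on the logistic loss.
for _ in range(2000):
    for x, y in train:
        g = predict(x) - y          # gradient of the loss w.r.t. z
        w[0] -= 0.01 * g * x[0]
        w[1] -= 0.01 * g * x[1]
        b -= 0.01 * g

# Apply the trained model to features from an unseen scan.
print(predict((15.0, 0.85)) > 0.5)  # True: flags the large, dense lesion
print(predict((3.5, 0.22)) > 0.5)   # False: consistent with benign structure
```

The same shape of pipeline (labeled data in, a decision function out) underlies the far larger models used in production CAD systems.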

The latest machine-learning, deep-learning, and workflow automation technology can accelerate interpretation, improve accuracy, and reduce repetition for radiologists and other specialties. The truth is that most departmental picture archiving and communication systems (PACS) still don't provide the underlying infrastructure that enables these technologies to thrive. Interpreting and analyzing images requires easy access and free flow of imaging data to work effectively. However, studies are still often buried on CDs, file servers, or multiple hard-to-search locations, putting them out of reach of the latest processing algorithms. It's just one of the reasons why organizations are focused on consolidating and integrating imaging into one archive, turning it into a strategic asset.

Recent studies show that artificial intelligence algorithms can help radiologists improve the speed and accuracy of interpreting X-rays, CT scans, and other types of diagnostic images. Putting the technology into everyday clinical use, however, is challenging because of the complexities of development, testing, and obtaining regulatory approval.

Radiology algorithms focus narrowly on a single finding on images from a single imaging modality, for example, lung nodules on a chest CT scan. While this may be useful in improving diagnostic speed and accuracy in specific cases, the bottom line is an algorithm can only answer one question at a time. Because there are many types of images and thousands of potential findings and diagnoses, each would require a purpose-built algorithm. In contrast, a radiologist considers a myriad of questions and conclusions at once for every imaging test as well as incidental findings unrelated to the original reason for the review, which is quite common.

Accordingly, to fully support just the diagnostic part of radiologists’ work, developers would need to create, train, test, seek FDA clearance for, distribute, support, and update thousands of algorithms. And healthcare organizations and doctors would need to find, evaluate, purchase, and deploy numerous algorithms from many developers, then incorporate them into existing workflows. Compounding the challenge is deep-learning models’ voracious demand for data. Most models have been developed in controlled settings using available, and often narrow, data sets—and the results that algorithms produce are only as robust as the data used to create them. AI models can be brittle, working well with data from the environment in which they were developed but faltering when applied to data generated at other locations with different patient populations, imaging machines, and techniques.

While AI marketplaces should foster widespread adoption of AI in radiology, they also have the potential to help alleviate radiologist burnout by augmenting and assisting them in two ways. The first, through the iterative development process, is by facilitating the design of algorithms that integrate seamlessly into radiologists’ workflows and simplify them. The second is by improving the speed and quality of radiology reporting. These algorithms can automate repetitive tasks and act as virtual residents, pre-processing images to highlight potentially essential findings, making measurements and comparisons, and automatically adding data and clinical intelligence to the report for the radiologist’s review.

By taking over routine tasks, adding quality checks, and enhancing diagnostic accuracy, AI algorithms can be expected to improve clinical outcomes. For example, an FDA-cleared model automatically assesses breast density on digital mammograms, as dense breast tissue has been associated with an increased risk of breast cancer. By handling and standardizing that routine but essential task, the algorithm helps direct radiologists' attention to the patients at highest risk. Also, AI algorithms have proven equal to, and in some cases better than, an average radiologist at identifying breast cancer on screening mammograms.

As the population ages, the need for diagnostic radiology will surely increase. Meanwhile, radiology residency programs in the United States have only recently begun to reverse a multi-year decline in enrollments, raising the specter of a shortage of radiologists as the need for them grows. The recent emergence of AI marketplaces can accelerate the adoption of AI algorithms, helping to manage increasing workloads while providing doctors with tools to improve diagnoses, treatments, and, ultimately, patient outcomes.

Machine learning and AI technology are gaining ground in medical imaging. For many health IT leaders, machine learning is a welcome tool to help manage the growing volume of digital images, reduce diagnostic errors, and enhance patient care. Despite its benefits, some radiologists are concerned that this technology will diminish their role, as algorithms start to take a more active part in the image interpretation process while ingesting volumes of data far beyond what any human can process.

How Machine Learning Works

In traditional predictive modeling, researchers develop a hypothesis about how distinct inputs predict some particular outcome, and then they test their theories against data. In contrast, machine learning is the process of algorithmically turning raw data into new knowledge without being explicitly programmed. Machine-learning tools can analyze an immense amount of data to discover relationships and combinations of variables to propose a predictive model back to the researcher. These tools draw out rules from repositories of past knowledge to build an algorithmic foundation that can then analyze, and continually learn from, real-time data. These algorithms mimic how humans learn complex concepts. Machine learning is associated with computer-aided detection (CAD), and as a technique, it can be used to develop more powerful CAD algorithms.

Machine-learning tools can collect data across various IT systems, such as electronic health records (EHRs), laboratory information systems, and radiology and cardiology PACS. Other forms of data can be unstructured, including text in books, guidelines, or publications.

When it comes to medical imaging, there are ways to characterize and extract textures, shapes, and colors associated with various types of disease. After analyzing a database of existing images—which can reach billions in volume—a machine-learning algorithm can start to recognize patterns (while minimizing false positives) and automatically flag abnormalities within new images for more informed decision making.
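A minimal sketch of that feature-extraction step follows. The pixel grids and the flagging cutoff are invented for illustration, not drawn from any real radiology data or algorithm.

```python
# Reduce a toy grayscale "image" (rows of 0-255 pixel intensities) to a
# few numeric features that a pattern-recognition model could consume.

def image_features(img):
    pixels = [p for row in img for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    variance = sum((p - mean) ** 2 for p in pixels) / n   # a crude texture measure
    bright_area = sum(p > 200 for p in pixels) / n        # fraction of bright pixels
    return {"mean": mean, "variance": variance, "bright_area": bright_area}

def flag_abnormal(img, area_cutoff=0.25):
    # Flag scans whose bright region exceeds the cutoff fraction.
    return image_features(img)["bright_area"] > area_cutoff

normal = [[10, 12, 11], [9, 13, 10], [11, 10, 12]]
suspect = [[10, 230, 240], [12, 235, 245], [11, 9, 228]]
```

Production CAD systems extract hundreds of texture, shape, and intensity features per region and learn the cutoffs from databases of prior images rather than fixing them by hand.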

Algorithms for image analysis and decision support have been developed for decades, but most of them have not found their way into clinical practice. Nevertheless, many IT vendors and healthcare providers have made strides in the imaging space.

Benefits of Machine Learning

Machine learning—and CAD applications in general—show promise, and radiologists have much to gain from incorporating this technology into their operations given the following:

  • AI can evaluate an enormous number of imaging variables much faster, and more consistently, than a radiologist.
  • Algorithms facilitate decision making and education for inexperienced radiologists.
  • CAD can automate mundane reading and measurement tasks, freeing radiologists to focus on patient interaction, research, and complex higher-order thinking.
  • Machine learning can automate radiologist workflow, placing more time-sensitive cases higher on the radiologist’s worklist.
  • Machines have the potential to dramatically improve diagnostic accuracy, prevent medical errors, and reduce the overuse of testing.
  • Machine learning can act as a next-generation clinical decision support tool for radiologists, offering segmentation, classification, and pattern recognition that can be used to propose statistically significant guidance for image analysis.
  • Analyzing images can be highly subjective; machines replace subjectivity and reader variability with quantitative measurements that can improve patient outcomes.
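The workflow-automation point above — placing time-sensitive cases higher on the worklist — can be sketched in a few lines. The study IDs and model-assigned urgency scores are hypothetical.

```python
# Order a radiologist's worklist so studies an algorithm scored as more
# urgent are read first. Scores stand in for a model's estimated
# probability of a critical finding.

studies = [
    {"id": "CT-104", "urgency": 0.35},
    {"id": "XR-221", "urgency": 0.92},
    {"id": "MR-078", "urgency": 0.10},
]

def prioritized(worklist):
    # Highest urgency first; time-sensitive cases rise to the top.
    return sorted(worklist, key=lambda s: s["urgency"], reverse=True)

queue = [s["id"] for s in prioritized(studies)]
```

The sort itself is trivial; the value lies in the upstream model producing trustworthy urgency scores.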

Challenges of Machine Learning

Despite the potential benefits that machine learning brings to medical imaging, these challenges need to be addressed before widespread adoption occurs:

  • Many radiologists worry that the increased use of machine learning will lead to fewer jobs or a diminished role, which can cause some of them to resist technology.
  • Devices that conduct diagnostic interpretation are labeled class III devices by the U.S. FDA. This class label makes approval challenging and time-consuming. Class II devices avoid diagnosis and offer only measurement features (e.g., raising a red flag on an image), which is a more straightforward pathway to FDA approval.
  • Healthcare organizations that rely on machine learning open the door for potential legal trouble if an algorithm leads to misdiagnosis or medical error.
  • Building machine-learning algorithms is complicated and requires massive inputs of clinical and peer-reviewed data to learn the rules for evaluating new images.
  • Most CAD algorithms address specific tasks or conditions. It is challenging to develop generalized algorithms that apply to broad sets of scenarios.
  • Although image analysis and decision-support projects have been around for years, many do not advance past a piloting phase.
  • The "black box" effect: algorithms can flag an image object as abnormal but cannot explain how that determination was made or supply more granular detail to the radiologist.

The radiology community has had mixed feelings about the use of AI, with some portraying the technology as a boon to medical imaging while others believe that AI is many years (if not decades) away from replicating the work of radiologists.

A popular topic of discussion is whether machine learning will displace much of the work of radiologists (and of other groups, such as anatomical pathologists). Proponents of this view claim that organizations waste time and resources having humans interpret diagnostic images when algorithms can process higher volumes at lower cost. Some stakeholders also argue that algorithms improve patient safety because they are not burdened by stress or exhaustion.

On the other hand, other stakeholders do not see machines “taking over” the field, but rather working in a supplemental role. They argue that a machine's role is not to replace the radiologist but to enhance a radiologist’s ability to identify and correctly diagnose any problems that appear on diagnostic images. Machine learning gives radiologists a way to manage the exponential growth in imaging volumes, while occasionally highlighting features that may have been overlooked. Having access to this “virtual consultant” can also bolster strategic partnerships with referring physicians, as radiologists will have greater insight for interpreting images. As far as risk goes, healthcare organizations that are risk-averse will always have aspects of medical image analysis that require manual review to mitigate ethical or legal concerns.

Timing is another significant factor. Skeptics point out that there have been thousands of machine-learning algorithms developed, but rarely do they advance from the research floor to clinical application. Furthermore, even if an algorithm is created that outperforms radiologists at all tasks across pilot stages, there is no clear timeline for how long it would take to verify those findings or get FDA approval to use it for diagnosis.

We will likely see continued incremental changes and specialized applications of machine-learning algorithms in the short term. Many AI vendors and their healthcare provider partners claim their technology will be ready to use in the next year or two for relatively well-characterized images (such as x-rays). Whether bullish or skeptical about the technology, most industry experts agree that over the next five to ten years, machine learning will become a powerful tool in radiology as it branches out to most other types of imaging modalities, including CT studies, MRI exams, and ultrasound.


Here are a few considerations for current and future machine-learning implementations:

  • Engage all stakeholders in the planning process. Machine learning has the potential to revolutionize medical imaging. Radiologists can use this technology to make volumes of data actionable, streamline workflow, and ultimately improve patient outcomes. However, machine-learning initiatives can fail if healthcare organizations do not address existing cultural resistance to new IT systems or quell the fear that AI will make the radiologist role obsolete.
  • Be mindful of your scope of application and implementation timeline. Many machine-learning algorithms are narrow in their application, working across select modalities to inform decisions on specific diseases. Although compelling cases exist in imaging, many machine-learning tools are still under development and may take years before they are available for clinical use.
  • Incorporate machine learning as a complement to the radiology staff. Even when algorithms are accurate, radiologists still need to apply their judgment, using the algorithm as a secondary support system to optimize care. Researchers have shown that experienced radiologists can still outperform highly accurate algorithms in diagnostic performance. On the other hand, inexperienced or non-specialist radiologists are more susceptible to mistakes and may fail to consider all variables systematically when reading images.

The views expressed in this article are solely those of the authors and not of the company (IBM) they represent.

Tue, 26 Jul 2022 12:00:00 -0500 en text/html https://www.mddionline.com/radiological/consider-promises-and-challenges-medical-image-analyses-using-machine-learning
Intel’s ATX12VO Standard: A Study In Increasing Computer Power Supply Efficiency

The venerable ATX standard was developed in 1995 by Intel as an attempt to standardize what had until then been a PC ecosystem formed around the IBM AT PC’s legacy. The preceding AT form factor was not so much a standard as a loose copy of the IBM AT’s mainboard, flaws and all.

With the ATX standard also came the ATX power supply (PSU), whose specification defines the standard voltage rails and the function of each additional feature, such as soft power-on (PS_ON). As with all electrical appliances and gadgets during the 1990s and beyond, ATX PSUs became the subject of power-efficiency regulations, which would also lead to the 80 Plus certification program in 2004.

Since 2019, Intel has been promoting the ATX12VO (12 V only) standard for new systems, but what is this new standard about, and will switching everything to 12 V really be worth the power savings?

What ATX12VO Is

As the name implies, the ATX12VO standard is essentially about removing the other voltage rails that currently exist in the ATX PSU standard. The idea is that by providing one single base voltage, any other voltages can be generated as needed using step-down (buck) converters. Since the Pentium 4 era this has already become standard practice for the processor and much of the circuitry on the mainboard anyway.
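That step-down approach is easy to put in numbers: an ideal buck converter produces Vout = D × Vin, where D is the switching duty cycle (real converters add switching and conduction losses). The sketch below, with purely illustrative values, computes the duty cycles a mainboard regulator would need to derive the legacy rails from a 12 V input.

```python
# Ideal buck converter: Vout = D * Vin, so D = Vout / Vin.
# Losses and regulation details are ignored; values are illustrative.

def buck_duty_cycle(v_in, v_out):
    if not 0 < v_out <= v_in:
        raise ValueError("a buck converter can only step voltage down")
    return v_out / v_in

for rail in (5.0, 3.3):
    duty = buck_duty_cycle(12.0, rail)
    print(f"{rail} V rail from 12 V input: duty cycle {duty:.1%}")
```

At this scale it is just a ratio, but it shows why the 5 V and 3.3 V rails can be generated locally on the mainboard, close to their loads, rather than inside the PSU.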

As the ATX PSU standard moved from the old 1.x revisions into the current 2.x revision range, the -5V rail was removed, and the -12V rail made optional. The ATX power connector with the mainboard was increased from 20 to 24 pins to allow for more 12 V capacity to be added. Along with the Pentium 4’s appetite for power came the new 4-pin mainboard connector, which is commonly called the “P4 connector”, but officially the “+12 V Power 4 Pin Connector” in the v2.53 standard. This adds another two 12 V lines.

Power input and output on the ASRock Z490 Phantom Gaming 4SR, an ATX12VO mainboard. (Credit: Anandtech)

In the ATX12VO standard, the -12 V, 5 V, 5 VSB (standby) and 3.3 V rails are deleted. The 24-pin connector is replaced with a 10-pin one that carries three 12 V lines (one more than ATX v2.x) in addition to the new 12 VSB standby voltage rail. The 4-pin 12 V connectors would still remain, and still require one to squeeze one or two of those through impossibly small gaps in the system’s case to get them to the top of the mainboard, near the CPU’s voltage regulator modules (VRMs).

While the PSU itself would be somewhat streamlined, the mainboard would gain these VRM sections for the 5 V and 3.3 V rails, as well as power outputs for SATA, Molex and similar. Essentially the mainboard would take over some of the PSU’s functions.

Why ATX12VO exists

A range of Dell computers and servers that will be subject to California’s strict efficiency regulations.

The folks over at GamersNexus have covered their research and the industry’s thoughts on the topic of ATX12VO in an article and video that were published last year. To make a long story short, OEM system builders and systems integrators are subject to fairly strict power-efficiency regulations, especially in California. Starting in July of 2021, new Tier 2 regulations will come into force that add stricter requirements for OEM and SI computer equipment: see 1605.3(v)(5) (specifically table V-7) for details.

In order to meet these ever more stringent efficiency requirements, OEMs have been creating their own proprietary 12 V-only solutions, as detailed in GamersNexus’ recent video review of the Dell G5 5000 pre-built desktop system. Intel’s ATX12VO standard therefore seems more targeted at unifying these proprietary standards than at replacing ATX v2.x PSUs in DIY systems. For the latter group, who build their own systems out of standard ATX, mini-ITX and similar components, these stringent efficiency regulations do not apply.

The primary question thus becomes whether ATX12VO makes sense for DIY system builders. While the ability to (theoretically) increase power efficiency especially at low loads seems beneficial, it’s not impossible to accomplish the same with ATX v2.x PSUs. As stated by an anonymous PSU manufacturer in the GamersNexus article, SIs are likely to end up simply using high-efficiency ATX v2.x PSUs to meet California’s Tier 2 regulations.

Evolution vs Revolution

Seasonic’s CONNECT DC-DC module connected to a 12V PSU. (Credit: Seasonic)

Ever since the original ATX PSU standard, the improvements have been gradual and never disruptive. Although some got caught out by the missing negative voltage rails when trying to power old mainboards that relied on -5 V and -12 V being present, in general these changes were minor enough to be incorporated into the natural upgrade cycle of computer systems. Not so with ATX12VO, which absolutely requires both an ATX12VO PSU and an ATX12VO mainboard to accomplish the increased efficiency goals.

While the possibility of using an ATX v2.x to ATX12VO adapter exists that passively adapts the 12 V rails to the new 10-pin connector and boosts the 5 VSB line to 12 VSB levels, this actually lowers efficiency instead of increasing it. Essentially, the only way for ATX12VO to make a lot of sense is for the industry to switch over immediately and everyone to upgrade to it as well without reusing non-ATX12VO compatible mainboards and PSUs.

Another crucial point here is that OEMs and SIs are not required to adopt ATX12VO. Much like Intel’s ill-fated BTX alternative to the ATX standard, ATX12VO is a suggested standard that manufacturers and OEMs are free to adopt or ignore at their leisure.

Also important are the obvious negatives that ATX12VO introduces.

Internals of Seasonic’s CONNECT modular power supply. (Credit: Tom’s Hardware)

Add to this potential alternatives like Seasonic’s CONNECT module. This does effectively the same as the ATX12VO standard, removing the 5 V and 3.3 V rails from the PSU and moving them to an external module, off of the mainboard. It can be fitted into the area behind the mainboard in many computer cases, making for very clean cable management. It also allows for increased efficiency.

As PSUs tend to survive at least a few system upgrades, it could be argued that from an environmental perspective, having the minor rails generated on the mainboard is undesirable. Perhaps the least desirable aspect of ATX12VO is that it reduces the modular nature of ATX-style computers, making them more like notebook-style systems. A more reasonable approach might be a CONNECT-like module that offers both an ATX 24-pin and an ATX12VO-style 10-pin connectivity option.

Thinking larger

In the larger scheme of power efficiency it can be beneficial to take a few steps back from details like the innards of a computer system and look at e.g. the mains alternating current (AC) that powers these systems. A well-known property of switching mode power supplies (SMPS) like those used in any modern computer is that they’re more efficient at higher AC input voltages.

Power supply efficiency at different input voltages. (Credit: HP)

This can be seen clearly when looking for example at the rating levels for 80 Plus certification. Between 120 VAC and 230 VAC line voltage, the latter is significantly more efficient. To this one can also add the resistive losses from carrying double the amps over the house wiring for the same power draw at 120 V compared to 230 VAC. This is the reason why data centers in North America generally run on 208 VAC according to this APC white paper.
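The point about line voltage and wiring losses is easy to put in numbers. The sketch below assumes an arbitrary 0.1 Ω of wiring resistance and a 600 W load; the absolute values are illustrative, but the ratio between them is not.

```python
# For the same power draw, halving the line voltage doubles the current,
# and I^2 * R wiring losses roughly quadruple (here, (230/120)^2 ~ 3.7x).

def wiring_loss_watts(power_w, volts, wire_resistance_ohms=0.1):
    current = power_w / volts           # I = P / V
    return current ** 2 * wire_resistance_ohms

loss_120 = wiring_loss_watts(600, 120)  # 5 A of line current
loss_230 = wiring_loss_watts(600, 230)  # ~2.6 A of line current
```

This is the same arithmetic behind data centers favoring 208 VAC distribution: less current for the same delivered power means less heat wasted in the wiring.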

For crypto miners and similar, wiring up their computer room for 240 VAC (North American hot-neutral-hot) is also a popular topic, as it directly boosts their profits.

Future Outlook

Whether ATX12VO will become the next big thing or fizzle out like BTX and so many other proposed standards is hard to tell. One thing which the ATX12VO standard has against it is definitely that it requires a lot of big changes to happen in parallel, and the creation of a lot of electronic waste through forced upgrades within a short timespan. If we consider that many ATX and SFX-style PSUs are offered with 7-10 year warranties compared to the much shorter lifespan of mainboards, this poses a significant obstacle.

Based on the sounds from the industry, it seems highly likely that much will remain ‘business as usual’. There are many efficient ATX v2.x PSUs out there, including 80 Plus Platinum and Titanium rated ones, and Seasonic’s CONNECT and similar solutions would appeal heavily to those who are into neat cable management. For those who buy pre-built systems, the use of ATX12VO is also not relevant, so long as the hardware is compliant to all (efficiency) regulations. The ATX v2.x standard and 80 Plus certification are also changing to set strict 2-10% load efficiency targets, which is the main target with ATX12VO.

What would be the point for you to switch to ATX12VO, and would you pick it over a solution like Seasonic CONNECT if both offered the same efficiency levels?

(Heading image: Asrock Z490 Phantom Gaming 4SR with SATA power connected, credit: c’t)

Service Express Acquires iTech Solutions Group and iInTheCloud

GRAND RAPIDS, Mich., July 5, 2022 /PRNewswire/ -- Service Express, a leader in global data center and infrastructure solutions, today announces the acquisition of managed services provider iTech Solutions Group and cloud hosting provider iInTheCloud. The acquisitions bring expanded IBM-specific offerings and IBM Gold Business Partner status to Service Express customers in the U.S. This deal replicates the company's existing IBM services, expertise and IBM Gold Business Partner status in the U.K. to bring comprehensive solutions to customers internationally.


Based in Connecticut, iTech has over 20 years of experience offering expert solutions as an IBM Gold Business Partner. The company has worked alongside financial services, manufacturing and retail organizations, helping customers leverage IBM i infrastructure strategies. The acquisition of iTech brings a team of certified technical consultants, IBM i system administrators and skilled technicians with a deep understanding of IBM Power® Systems.

"At iTech, we take pride in our dedication to the success of our people and customers," said Pete Massiello, President of iTech Solutions Group. "Joining Service Express gives us the ability to expand our service offerings to existing and new customers, which was challenging as a smaller organization. I'm looking forward to working with the Service Express team to create synergy and bring our solutions to more companies around the globe."

In addition to hardware solutions, consulting and managed services, iTech provides customers with IBM i cloud hosting solutions by utilizing iInTheCloud infrastructure. iInTheCloud is a cloud hosting provider with Tier III data centers built on IBM Power Systems to deliver secure, scalable and resilient solutions for companies running IBM i, AIX and PowerLinux.

iInTheCloud allows organizations to host production, test, development or replicate environments to support disaster recovery and business continuity. The company's Michigan-based data centers grant companies access to redundant power, cooling and communication feeds ensuring environments are highly available.

"My focus has always been on creating flexible, reliable, and scalable solutions for IBM i and Power System customers around the Midwest," said Larry Bolhuis, Co-President of iInTheCloud. "With Service Express' headquarters only minutes from our data centers, the company has been the go-to service provider for many of our customers, as well as for iInTheCloud since its inception. I'm excited to expand our support options and capabilities for our new and existing customers."

Service Express offers an extensive portfolio of solutions, including data center maintenance, managed and infrastructure services designed to help customers maintain and evolve their digital IT strategies.

"The acquisition of iTech and iInTheCloud accelerate the expansion of our managed and infrastructure service offerings to the U.S.," said Ron Alvesteffer, President and CEO of Service Express. "We anticipate strong company growth as we continue to broaden and diversify our solutions, bringing a wider portfolio of services to customers in the U.S. and U.K."

For more information on Service Express and the company's solutions, visit serviceexpress.com.  

About Service Express 

Service Express is an industry-leading data center solutions provider specializing in global multivendor maintenance, hybrid cloud, managed and infrastructure services, hardware solutions and more. Companies around the globe trust Service Express to deliver reliable end-to-end support. Service Express' flagship technology, ExpressConnect®, helps IT teams automate support with monitoring, ticketing, integrations and account management. For more information, visit serviceexpress.com

About iTech Solutions Group

iTech Solutions, an IBM Gold Business Partner, helps organizations get the most performance, utilization and return on investment from existing or new IBM Power® Systems running IBM i, while ensuring critical business data is secure. Offerings include IBM Power Systems and storage, managed administration and OS subscription services, IBM i cloud hosting, OS Upgrades, PTF maintenance, HMC and FSP Upgrades, security assessments and remediation, HA replication solutions, DR testing, tape and disk encryption, virtual tape libraries, and more. For more information, visit itechsol.com.

About iInTheCloud

iInTheCloud specializes in IBM i cloud hosting solutions by leveraging IBM Power® and IBM Flash Systems in its Tier III data centers. The company works alongside organizations to deliver secure, scalable and resilient solutions to meet specific system requirements. With iInTheCloud's secure cloud, companies can consolidate workloads, increase server utilization, reduce capital costs, lower management costs, virtualize and provision memory, processor, and I/O resources. For more information, visit iinthecloud.com.


SOURCE Service Express

Future Chip Innovation Will Be Driven By AI-Powered Co-Optimization Of Hardware And Software

To say we’re at an inflection point of the technological era may be an obvious declaration to some. How various technologies and markets will advance is nuanced, however, though a common theme is emerging. Innovation is moving at a pace humankind has seen at only a few rare points in history. The invention of the printing press and the ascension of the internet come to mind as similar inflection points, but current innovation trends are being driven aggressively by machine learning and artificial intelligence (AI). In fact, AI is empowering rapid technology advances in virtually all areas, from the edge and personal devices, to the data center and even chip design itself.

There is also a self-perpetuating effect at play, because the demand for intelligent machines and automation everywhere is also ramping up, whether you consider driver-assist technologies in the automotive industry, recommenders and speech-recognition input in phones, or smart home technologies and the IoT. What’s spurring our current voracious demand for tech is that leading-edge OEMs, from big names like Tesla and Apple to scrappy start-ups, are now beginning to realize great gains in silicon and system-level development beyond the confines of Moore’s Law alone.

Intelligent Co-Optimization Of Software And Hardware Leads To Rapid Innovation

In fact, what both of the aforementioned market leaders have recently demonstrated is that advanced system designs can only be fully optimized by taking a holistic approach, and tightly coupling software development and use-case workloads together with hardware chip-level design, such that new levels of advancement are realized that otherwise wouldn’t be possible if solely relying on semiconductor process and other hardware-focused advancements.

It used to be that hardware engineers would drive software engineering teams to complete full solutions. Now, however, best practice is based on much more of a co-development model. Consider Apple’s internal silicon development effort with its M1 series of processors for its MacBook and Mac mini portfolio. By engineering its own tightly-coupled, highly advanced solutions – with its software and application workloads considered during the design of the hardware — Apple has demonstrated time and time again how it can do more with less, with arguably some of the best performance-per-watt metrics in the PC industry. Likewise, Tesla realized if it were to achieve its lofty goals of full level 4 and 5 self-driving autonomy, that it would have to engineer its own custom silicon engines and systems, and as such the company has blazed a trail with its FSD (Full Self-Driving) chip technology. Again though, Tesla achieved this feat with the marriage of its own specialized application workloads and software driving its hardware development, not the other way around.

Obviously Apple and Tesla have huge resources and big budgets they can bring to bear to develop their own in-house chips and technology. However, new tools are emerging, once again bolstered by advancements in AI, which may allow even scrappy start-ups with much smaller design teams and budgets to roll their own silicon, or at least develop more optimized solutions that are more powerful and efficient, versus general purpose chips and off-the-shelf solutions.

AI-Assisted Chip Design Is Only The Beginning

And it’s in this area of chip design tools that companies like Synopsys are making great strides to usher in a new era of holistic design approaches for chip technologies, fueled by AI-enhanced automation. Previously, my firm partner Marco reported on Synopsys’ evolution of its DSO.ai technology, which employs machine learning to drive dramatically faster place-and-route processes for design engineers. This is a critical step in the semiconductor design process, otherwise known as floor planning, as chip designs are mapped to silicon. The iterative nature of the process, targeted at optimizing for silicon area, power and performance goals, is a natural fit for machine learning and can dramatically improve time to market and save engineering man-hours, freeing up engineers to focus on new innovations.
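The placement objective behind floor planning can be made concrete with a toy example. The blocks, nets, and 2×2 grid below are invented, and exhaustive search stands in for the learned search a tool like DSO.ai uses to navigate astronomically larger design spaces.

```python
# Toy floor planning: assign blocks to grid slots so total wirelength
# (sum of Manhattan distances over connected blocks) is minimized.
import itertools

slots = [(0, 0), (0, 1), (1, 0), (1, 1)]                    # 2x2 placement grid
nets = [("cpu", "cache"), ("cpu", "io"), ("cache", "mem")]  # connectivity

def wirelength(placement):
    # The cost function a placer tries to minimize.
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in nets)

def best_placement(blocks):
    # Try every assignment of blocks to slots; keep the cheapest one.
    candidates = (dict(zip(blocks, perm))
                  for perm in itertools.permutations(slots))
    best = min(candidates, key=wirelength)
    return best, wirelength(best)

placement, cost = best_placement(["cpu", "cache", "mem", "io"])
```

At real scale, billions of cells make enumeration impossible, which is why learning-guided search over this kind of cost landscape is such a compelling application.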

I had a chance to speak with Synopsys President and COO, Sassine Ghazi who notes, “We’ve been spoiled by Moore’s Law for far too long.” What Sassine was alluding to here, was that simply moving to a newer process node was all that was needed historically to achieve significant performance, power and efficiency gains with many semiconductor designs. While, to an extent, this is still technically the case today, it has become obvious that innovation in other areas is necessary to achieve the larger gains that are necessary to help us address current market demands. “Today’s technology inflection point is demanding us to rethink design approaches, and transition from the constraints of scale complexity to drive innovation at systemic complexity levels.” Sassine continued, “This is, in-part, how we’ll realize the lofty goal of 1000X performance advancements set by many major market innovators like Intel, IBM and others.”

Ghazi also notes that the company is working on harnessing AI to accelerate and automate the design verification and validation process of chips, where the goal is to wring out anomalies and application marginalities before chips are sent to mass production and deployment. “Validation and verification are great opportunities for machine learning, where the AI can help not only time to market, but also expand the test coverage area, which can be especially critical for general purpose silicon that needs broader confidence in a wider range of applications.”

Moving forward, Ghazi also notes the company is striving to develop new tools that allow OEMs to validate and achieve silicon design goals by running their specialized software and application workloads directly into the front-end design process, while also utilizing machine learning to optimize chips based on this early critical input. Ghazi reports the company is targeting 2022 for early customer engagements specifically in these new optimization areas. In addition, as the complex chart highlights above, Synopsys is focused on automating and advancing all areas of modern, cutting-edge chip design in the future, in an effort to address new market demand and dynamics, allowing us to scale beyond just Moore’s Law-driven chip fab process advancements.

Regardless, Synopsys is not alone in this realization and, as Ghazi notes, “it’s going to take an entire industry” to further drive innovation to its fullest potential, and meet current and future market demands for new, critical enabling technologies. We’re in an age now when nearly anything is possible, from the metaverse to autonomous vehicles and commercialized space travel, and machine learning is at the nexus of it all.

SingleStore announces $116M financing led by Goldman Sachs Asset Management

SingleStore, the cloud-native database built for speed and scale to power data-intensive applications, today announced it has raised $116 million in financing led by the growth equity business within Goldman Sachs Asset Management (Goldman Sachs) with new participation from Sanabil Investments. Current investors Dell Technologies Capital, GV, Hewlett Packard Enterprise (HPE), IBM ventures and Insight Partners, among others, also participated in the round.

“By unifying different types of workloads in a single database, SingleStore supports modern applications, which frequently run real-time analytics on transactional data,” said Holger Staude, managing director at Goldman Sachs. “The company aims to help organizations overcome the challenges of data intensity across multi-cloud, hybrid and on-prem environments, and we are excited to support SingleStore as it enters a new phase of growth.”

“Our purpose is to unify and simplify modern data,” said SingleStore CEO Raj Verma. “We believe the future is real time, and the future demands a fast, unified and high-reliability database — all aspects in which we are strongly differentiated. I am very excited to partner with Goldman Sachs, the beacon of financial institutions, and further expand our relationship.”

“At Siemens Global Business Services, we rely on SingleStore to drive our Pulse platform, which requires us to process massive amounts of data from disparate sources,” said Christoph Malassa, Head of Analytics and Intelligence Solutions, Siemens. “The speed and scalability SingleStore provides has allowed us to better serve both our customers and our internal team, and to expand our capabilities along with them, e.g. enabling online analytics that previously had to be conducted offline.”

The funding comes on the heels of the company’s recent onboarding of its new chief financial officer, Brad Kinnish, and today the company is pleased to welcome Meaghan Nelson as its new general counsel. These two strategic executive hires bring a great depth of experience to the C-suite, making it even more equipped to explore future paths for company growth.

“I am beyond thrilled to join the team at SingleStore,” said Kinnish. “It’s such an exciting time in the database industry. Major forces such as the rise in cloud and the blending of operational and transactional workloads are causing a third wave of disruption in the way data is managed. SingleStore by design is a leader in the market, and I am confident we will achieve a lot in the coming year.”

SingleStore’s new general counsel, Meaghan Nelson, brings over a decade of legal experience to SingleStore, most recently as associate general counsel at SaaS company Veeva Systems, along with prior roles in private practice taking companies such as MaxPoint Interactive, Etsy, Natera and Veeva through their IPOs.

“I couldn’t be more excited to join SingleStore at this important inflection point for the company,” said Nelson. “I feel that my deep experience working closely with companies through the IPO process along with my experience in scaling G&A orgs will be of great value to SingleStore as we continue to achieve new heights.”

Previous investments from IBM ventures, HPE and Dell have fueled SingleStore’s strong momentum. It recently launched SingleStoreDB with IBM as well as announced a partnership with SAS to deliver ultra-fast insights at lower costs. The company has almost doubled its headcount in the last 12 months and continues to aggressively hire to meet the demand for its product and services.

This funding follows SingleStore’s recent product release that empowers customers to create fast and interactive applications at scale and in real time. SingleStore will feature and demo these enhancements at a virtual launch event, [r]evolution 2022, tomorrow, July 13. Register and learn more about the event here.

Source: https://sdtimes.com/singlestore-announces-116m-financing-led-by-goldman-sachs-asset-management/ (Tue, 12 Jul 2022)
SD Times news digest: ShiftLeft Educate, .NET 6 Preview 6, and IBM to acquire Bluetab

ShiftLeft Educate delivers security training to developers right in their workflow, offering contextual training for different skill levels.

Key features include analytics; the ability to select appropriate training resources based on language and vulnerability type; and interactive videos, real-world examples, and mitigation information from Kontra.

There is also a paid version of ShiftLeft Educate, which allows customers to roll out, assign, and track completion of training. 

.NET 6 Preview 6

The sixth preview of .NET 6 is now available. According to Microsoft, this is a small release, and the next preview will be much bigger. New features include three new workload commands for discovery and management, TLS support for System.DirectoryServices.Protocols, improved sync-over-async performance, and more.

.NET 6 Preview 6 has been tested for and supports Visual Studio 2022 Preview 2, which will allow developers to use new tools in Visual Studio, such as .NET MAUI, Hot Reload for C# apps, and the new Web Live Preview for WebForms. 

More information is available here.

IBM to acquire Bluetab to expand data consulting services

According to IBM, Bluetab will become a part of the company’s data services consulting practice. This will help further advance its hybrid cloud and AI strategy. 

“The outside-in digital transformation of the past is giving way to the inside-out potential of using company-owned data with AI and automation to generate business value and create intelligent workflows,” said Mark Foster, senior vice president of IBM Services and Global Business Services. “Our acquisition of Bluetab will fuel migration to the cloud and help our clients to realize even more value from their mission-critical data.”

Source: https://sdtimes.com/security/sd-times-news-digest-shiftleft-educate-net-6-preview-6-and-ibm-to-acquire-bluetab/ (Thu, 14 Jul 2022)
SingleStore is the latest Data Unicorn with $116M Funding Round

Graphic courtesy of SingleStore.

SingleStore, provider of a popular relational database, has announced a $116 million extended Series F round led by Goldman Sachs. The company reported a nearly $1.3 billion valuation, launching it into unicorn status.

There is a growing demand for flexible data platforms as increasing amounts of data are ingested and more companies launch big data initiatives. SingleStore offers a fully managed, elastic cloud database, as well as a database SaaS that supports local, distributed, and highly scalable SQL deployments. SingleStoreDB was designed with both transactional and analytical capabilities, combining real-time streaming with operational and analytical processing. Additionally, users can integrate, monitor, and query data from one or more separate repositories.
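The core idea behind this unified approach is that transactional writes and analytical reads run against the same live tables, with no export step to a separate warehouse. The sketch below illustrates only that concept, using Python's built-in SQLite module as a stand-in (SingleStore itself is accessed over a MySQL-compatible protocol, and the table and column names here are hypothetical):

```python
import sqlite3

# Illustration of unified workloads: one database serving both
# transactional writes and analytical reads on the same table.
conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE orders (
        id     INTEGER PRIMARY KEY,
        region TEXT NOT NULL,
        amount REAL NOT NULL
    )
    """
)

# Transactional workload: individual order inserts, committed atomically.
with conn:
    conn.executemany(
        "INSERT INTO orders (region, amount) VALUES (?, ?)",
        [("EMEA", 120.0), ("AMER", 75.5), ("EMEA", 30.0)],
    )

# Analytical workload: an aggregate query over the same fresh data,
# available immediately after the transaction commits.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('AMER', 75.5), ('EMEA', 150.0)]
conn.close()
```

In a distributed HTAP database the same pattern holds, but the engine additionally handles scale-out storage and concurrent streaming ingest, which is the part a single-node sketch like this cannot show.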

This round was led by the growth equity business within Goldman Sachs Asset Management with new participation from Sanabil Investments. Current investors Dell Technologies Capital, GV, Hewlett Packard Enterprise, IBM ventures and Insight Partners, among others, also participated in the round.

“By unifying different types of workloads in a single database, SingleStore supports modern applications, which frequently run real-time analytics on transactional data,” said Holger Staude, managing director at Goldman Sachs. “The company aims to help organizations overcome the challenges of data intensity across multi-cloud, hybrid and on-prem environments, and we are excited to support SingleStore as it enters a new phase of growth.”

The funding round comes on the heels of a major product update in late June, including a Wasm-powered code engine and a dbt adapter. For a deep dive into the platform’s capabilities, SingleStore is hosting a virtual launch event tomorrow, July 13, at 10:00 a.m. PT called “[r]evolution 2022.” The event will feature product demos, discussions with industry experts, and a live Q&A session. There will also be a keynote focused on real-time analytics from SingleStore CEO Raj Verma.

This infographic shows how transactional and analytical workloads combine into a unified database. Source: SingleStore

“Our purpose is to unify and simplify modern data,” said Verma. “We believe the future is real time, and the future demands a fast, unified and high-reliability database — all aspects in which we are strongly differentiated.”

This seems to have been a busy year for SingleStore so far. The company began 2022 with collaborations with IBM and SAS and recently announced a partnership with Intel in its Disruptor Initiative to optimize the performance of SingleStoreDB on current and future Intel architectures.

The company has also nearly doubled its staff in the past year and is continuing to hire. Earlier this year, Shireesh Thota joined as SVP of Engineering, Yatharth Gupta became VP of Product Management, and Brad Kinnish was named CFO. Today the company announced Meaghan Nelson as general counsel.

“I couldn’t be more excited to join SingleStore at this important inflection point for the company,” said Nelson. “I feel that my deep experience working closely with companies through the IPO process along with my experience in scaling G&A orgs will be of great value to SingleStore as we continue to achieve new heights.”

Related Items:

Database Firm SingleStore Scores $80M in Series F Funding

SingleStore and Intel Collaborate to Deliver Real-Time Data Technology

Peering Into the Crystal Ball of Advanced Analytics

Source: https://www.datanami.com/2022/07/12/singlestore-is-the-newest-data-unicorn-with-116m-funding-round/ (Mon, 11 Jul 2022)