A former IBM engineer is building open-source tractors for the masses in hopes of changing how food is grown and who grows it.
The founder of Ronnie Baugh Tractors in Paint Rock, Alabama, learned an invaluable lesson as a Marine in the Vietnam War: If you believe something’s impossible, you’re dead.
Horace Clemmons, now 79, says that advice not only kept him alive in the war; it’s also the driving force behind his current effort to reinvent an industry now dominated by corporate giants like John Deere and AGCO.
"The goal is to try to convince everybody else they can make these things," Clemmons said. "All you need is two jack stands and somebody that can turn a wrench and somebody that can weld, and you're in business making your own tractor."
Clemmons worked for IBM in the 1970s as an early software engineer, but his success came after he left that job and founded the first cash register company to use open-source software.
"I said at the time I left IBM, 'I will create a business unlike IBM,'" Clemmons said. "People will not do business with me because they have to. They will do business with me because they want to."
In a former t-shirt factory surrounded by cotton farms that now export their harvest to China, Clemmons today is trying to bring that same open-source philosophy to tractors.
Everything that goes into a tractor is relatively cheap and widely available. If someone can’t afford the $22,000 sticker price, Clemmons will sell them the plans to build their own. The tractor’s “open system design” is a far cry from the patented and proprietary technology of a John Deere.
The hope is that this novel approach will inspire farmers to innovate and customize the tractor the same way open-source computing and the coders who had access to it sparked the PC revolution that changed the world.
"So IBM started dominating the PC market, but Michael Dell in his dorm room looked at what IBM was doing and said, 'But the customers are asking for something different,'" Clemmons said. "And IBM says, 'No, no, you buy what I got.' And Michael decided, 'Screw that. I'm going to start a business doing what the customer wants done.' Like, what was Dell's revenue last year? 107 or 8 billion. And what was IBM's? 50 something? Okay. If we do the right thing, that'll be John Deere 10 years from now."
Clemmons says his ultimate goal is to give the millions of impoverished small farmers around the world access to mechanization and the economic empowerment that comes with it. He says the whole concept behind Ronnie Baugh Tractors is to show the world what’s possible. Partners in Uganda, Senegal and the Philippines have already licensed the design and are now building Oggun tractors domestically. They’re even adding two-wheel and fully electric versions to the lineup.
Having access to low cost, highly customizable agricultural equipment that is also open-source technology has the potential to benefit millions of small farmers in the developing world. But in the U.S., the idea is also gaining traction — for very different reasons.
In Tarrytown, New York, the Stone Barns Center for Food and Agriculture is like a living laboratory for sustainable food production. Jack Algiere, the center’s director of agroecology, says he purchased one of Clemmons’ first Oggun tractors because of its versatility.
"The diversity of a farm like this where we're growing literally hundreds of different crops — there is no one piece of equipment, there is no giant thing that we'll use to solve all of our problems," Algiere said. "There are a lot of little instruments in our toolbox, so we want those to be as minimal as possible and as repairable as possible."
Not to mention, there's the ability to tinker with the design. Algiere says he’s made multiple modifications, cutting and welding the frame to raise the floorbed, making room for nearly a dozen different custom tools. All of these changes are shared among the growing Oggun community of small farmers. Farmers, he says, who have been underserved by the agricultural equipment industry since big agriculture all but wiped out the small farm in the 1950s.
"No one goes small anymore," Algiere said. "Once you get small, you go into garden equipment, lawn tractors, because that's where the market is. This is an invisible market."
It's an invisible but growing market. Algiere says as climate change wreaks havoc on agriculture, the need to adopt a smaller scale, locally adapted approach to growing food has never been greater.
"It's not about reliving some past or, you know, a fairy tale of what agriculture was, but what our future looks like," Algiere said.
At the Hudson Valley Seed Company in Accord, New York, Steven Crist uses an Oggun tractor to grow more than 70 varieties of crops a year, producing organic, heirloom seeds that are shipped all over the country.
"For us here, it's our finesse tool," Crist said. "When we got this tractor, we were scaling up at the same time, so we built our whole farm around this thing."
Crist says aside from the utility of the tractor, he felt aligned with the philosophy behind it.
"It felt in league with the mission of seed saving, organic seed saving in general, where we're trying to not create borders, not control a patent or a thing," Crist said. "We're trying to proliferate it and give people access to the ability to do good work in the world. That's kind of mushy, but it's true."
With climate change and an ever-widening global economic divide, Clemmons says he’s determined to see his plans through, even if it takes generations.
"I mean, people tell me I'm crazy, and I agree with them," Clemmons said. "I have a sister who tells me I'm as crazy as she is, and I chuckle and say, 'You're right, but mine is socially beneficial.' I am me. I get up every day being me."
Clemmons says he would rather fail trying to change a broken system than succeed by following its rules.
For a long time, the quantum computing industry seemed to avoid talking about “quantum advantage” or “quantum supremacy,” the point where quantum computers can solve problems that would simply take too long to solve on classical computers. To some degree, that’s because the industry wanted to avoid the hype that comes with those terms. But today IBM brought quantum advantage back into the conversation by detailing how it plans to use a novel error mitigation technique to chart a path toward running the increasingly large circuits it will take to reach this goal, at least for a certain set of algorithms.
It’s no secret that quantum computers hate nothing more than noise. Qubits are fickle things, after all, and the smallest change in temperature or vibration can make them decohere. There’s a reason the current era of quantum computing is associated with “noisy intermediate-scale quantum (NISQ) technology.”
The engineers at IBM and every other quantum computing company are making slow but steady strides toward reducing that noise at both the hardware and software levels; IBM’s 65-qubit systems from 2020, for example, now show twice the coherence time they had at launch. The coherence time of IBM’s transmon superconducting qubits now exceeds 1 ms.
But IBM is also taking another approach, betting on new error mitigation techniques dubbed probabilistic error cancellation and zero-noise extrapolation. At a very basic level, you can almost think of these as the quantum equivalent of the active noise cancellation in your headphones. The system regularly characterizes the noise and then essentially inverts the noisy circuits, enabling it to produce virtually error-free results.
IBM has now shown that this isn’t just a theoretical possibility but actually works in its existing systems. One disadvantage is that there is quite a bit of overhead in constantly sampling these noisy circuits, and that overhead grows exponentially with the number of qubits and the circuit depth. But that’s a trade-off worth making, argues Jerry Chow, the director of Hardware Development for IBM Quantum.
“Error mitigation is about finding ways to deal with the physical errors in certain ways, by learning about the errors and also just running quantum circuits in such a way that allows us to cancel them,” explained Chow. “In some ways, error correction is like the ultimate error mitigation, but the point is that there are techniques that are more near term with a lot of the hardware that we’re building that already provide this avenue. The one that we’re really excited about is called probabilistic error cancellation. And that one really is a way of trading off runtime — trading off running more circuits in order to learn about the noise that might be inherent to the system that is impacting your calculations.”
The system essentially inserts additional gates into existing circuits to probe the noise inherent in the system. And while the overhead increases exponentially with the size of the system, the IBM team believes it’s a weaker exponential than the best classical methods for estimating those same circuits.
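Zero-noise extrapolation, one of the two techniques named above, lends itself to a short sketch: run the same circuit at several deliberately amplified noise levels, measure the expectation value at each, then extrapolate back to a hypothetical zero-noise limit. The numbers below are illustrative stand-ins, not IBM measurements.

```python
import numpy as np

# Noise amplification factors (e.g. achieved by stretching gate pulses)
# and the expectation values measured at each level. Both arrays are
# made-up example data for illustration only.
noise_scale_factors = np.array([1.0, 1.5, 2.0, 2.5])
measured_expectations = np.array([0.91, 0.87, 0.83, 0.79])

# Fit a low-degree polynomial to the noisy measurements and evaluate
# it at noise scale = 0, i.e. the extrapolated "noise-free" result.
coeffs = np.polyfit(noise_scale_factors, measured_expectations, deg=1)
zero_noise_estimate = np.polyval(coeffs, 0.0)

print(f"zero-noise estimate: {zero_noise_estimate:.3f}")  # → 0.990
```

In practice the extrapolation model (linear, exponential, Richardson) must match how the errors actually scale, which is part of what makes the technique subtle on real hardware.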
As IBM previously announced, it plans to introduce error mitigation and suppression techniques into its Qiskit Runtime by 2024 or 2025 so developers won’t even have to think about these when writing their code.
Phishing incidents are on the rise. A report from IBM shows that phishing was the most popular attack vector in 2021, with one in five employees falling victim to phishing techniques.
Although technical solutions protect against phishing threats, no solution is 100% effective. Consequently, companies have no choice but to involve their employees in the fight against hackers. This is where security awareness training comes into play.
Security awareness training gives companies the confidence that their employees will execute the right response when they discover a phishing message in their inbox.
As the saying goes, "knowledge is power," but the effectiveness of knowledge depends heavily on how it is delivered. When it comes to phishing attacks, simulations are among the most effective forms of training because they directly mimic how an employee would experience and react to a genuine attack. Since employees do not know whether a suspicious email in their inbox is a simulation or a real threat, the training becomes even more valuable.
It is critical to plan, implement and evaluate a cyber awareness training program to ensure it truly changes employee behavior. However, for this effort to be successful, it should involve much more than just emailing employees. Key practices to consider include:
Because employees cannot tell the difference between phishing simulations and real cyberattacks, simulations evoke real emotions and reactions, so awareness training should be conducted thoughtfully. Organizations need to engage their employees to combat ever-increasing attacks and protect their assets, and it is important to keep morale high and create a positive culture of cyber hygiene.
Based on years of experience, cybersecurity firm CybeReady has seen companies fall into these common mistakes.
The approach of running a phishing simulation as a test to catch and punish "repeat offenders" can do more harm than good.
An educational experience that involves stress is counterproductive and even traumatic. As a result, employees will not go through the training but look for ways to circumvent the system. Overall, the fear-based "audit approach" is not beneficial to the organization in the long run because it cannot provide the necessary training over an extended period.
Solution #1: Be sensitive
Because maintaining positive employee morale is critical to the organization's well-being, provide positive just-in-time training.
Just-in-time training means that once employees have clicked on a link within the simulated attack, they are directed to a short and concise training session. The idea is to quickly educate employees on their mistake and give them essential tips for spotting malicious emails in the future.
This is also an opportunity for positive reinforcement, so be sure to keep the training short, concise, and positive.
Solution #2: Inform relevant departments.
Communicate with relevant stakeholders to ensure they are aware of ongoing phishing simulation training. Many organizations forget to inform relevant stakeholders, such as HR or other employees, that the simulations are being conducted. Learning has the best effect when participants have the opportunity to feel supported, make mistakes, and correct them.
It is important to vary the simulations. Sending the same simulation to all employees, especially at the same time, is not only uninstructive but also produces no valid metrics on organizational risk.
One reason is the "warning effect": the first employee to discover or fall for the simulation warns the others. Employees then anticipate the simulation and respond to the "threat" accordingly, bypassing both the simulation and the training opportunity.
Another negative impact is social desirability bias, which causes employees to over-report incidents to IT in order to be viewed more favorably. This overloads both the reporting system and the IT department.
This form of simulation also leads to inaccurate results, such as unrealistically low click-through rates and over-reporting rates. Thus, the metrics do not show the real risks of the company or the problems that need to be addressed.
Solution: Drip mode
Drip mode allows sending multiple simulations to different employees at different times. Certain software solutions can even do this automatically by sending a variety of simulations to different groups of employees. It's also important to implement a continuous cycle to ensure that all new employees are properly onboarded and to reinforce that security is important 24/7 - not just checking a box for minimum compliance.
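As a rough illustration of the drip idea (group names, template names, and the weekly cadence here are all invented for the example), a schedule can rotate lures across employee groups so that no two groups receive the same simulation at the same time:

```python
from datetime import date, timedelta

# Hypothetical phishing-simulation templates and employee groups.
templates = ["invoice-lure", "password-reset", "shipping-notice"]
groups = ["finance", "engineering", "sales"]
start = date(2022, 8, 1)  # arbitrary campaign start date

schedule = []
for week in range(len(templates)):
    for i, group in enumerate(groups):
        # Rotate templates so each group sees a different lure each week,
        # and offset the send day so groups are never hit simultaneously.
        template = templates[(week + i) % len(templates)]
        send_date = start + timedelta(weeks=week, days=i)
        schedule.append((send_date.isoformat(), group, template))

for row in schedule:
    print(row)
```

Over three weeks, every group cycles through every template, which both defeats the warning effect and yields per-group metrics over time.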
With over 3.4 billion phishing attacks per day, it's safe to assume that at least a million of them differ in complexity, language, approach, or even tactics.
Unfortunately, no single phishing simulation can accurately reflect an organization's risk. Relying on a single phishing simulation result is unlikely to provide reliable results or comprehensive training.
Another important consideration is that different groups of employees respond differently to threats, not only because of their vigilance, training, position, tenure, or even education level but because the response to phishing attacks is also contextual.
Solution: Implement a variety of training programs
Behavior change is an evolutionary process and should therefore be measured over time. Each training session contributes to the progress of the training. Training effectiveness, or in other words, an accurate reflection of genuine organizational behavior change, can be determined after multiple training sessions and over time.
The most effective solution is to continuously conduct various training programs (at least once a month) with multiple simulations.
It is highly recommended to train employees according to their risk level. A diverse and comprehensive simulation program also provides reliable measurement data based on systematic behavior over time. To validate their efforts at effective training, organizations should be able to obtain a valid indication of their risk at any given point in time while monitoring progress in risk reduction.
Creating such a program may seem overwhelming and time-consuming. That's why we have created a playbook of the 10 key practices you can use to create a simple and effective phishing simulation. Simply download the CybeReady Playbook or meet with one of our experts for a product demo and learn how CybeReady's fully automated security awareness training platform can help your organization achieve the fastest results with virtually zero IT effort.
LAWRENCE, Kan.--(BUSINESS WIRE)--Jul 28, 2022--
Cobalt Iron Inc., a leading provider of SaaS-based enterprise data protection, today announced that the company has been deemed one of the 10 Most Promising IBM Solution Providers 2022 by CIOReview Magazine. The annual list of companies is selected by a panel of experts and members of CIOReview Magazine’s editorial board to recognize and promote innovation and entrepreneurship. A technology partner for IBM, Cobalt Iron earned the distinction based on its Compass® enterprise SaaS backup platform for monitoring, managing, provisioning, and securing the entire enterprise backup landscape.
Cobalt Iron Compass® is a SaaS-based data protection platform leveraging strong IBM technologies for delivering a secure, modernized approach to data protection. (Graphic: Business Wire)
According to CIOReview, “Cobalt Iron has built a patented cyber-resilience technology in a SaaS model to alleviate the complexities of managing large, multivendor setups, providing an effectual humanless backup experience. This SaaS-based data protection platform, called Compass, leverages strong IBM technologies. For example, IBM Spectrum Protect is embedded into the platform from a data backup and recovery perspective. ... By combining IBM’s technologies and the intellectual property built by Cobalt Iron, the company delivers a secure, modernized approach to data protection, providing a ‘true’ software as a service.”
Through proprietary technology, the Compass data protection platform integrates with, automates, and optimizes best-of-breed technologies, including IBM Spectrum Protect, IBM FlashSystem, IBM Red Hat Linux, IBM Cloud, and IBM Cloud Object Storage. Compass enhances and extends IBM technologies by automating more than 80% of backup infrastructure operations, optimizing the backup landscape through analytics, and securing backup data, making it a valuable addition to IBM’s data protection offerings.
CIOReview also praised Compass for its simple and intuitive interface to display a consolidated view of data backups across an entire organization without logging in to every backup product instance to extract data. The machine learning-enabled platform also automates backup processes and infrastructure, and it uses open APIs to connect with ticket management systems to generate tickets automatically about any backups that need immediate attention.
To ensure the security of data backups, Cobalt Iron has developed an architecture and security feature set called Cyber Shield for 24/7 threat protection, detection, and analysis that improves ransomware responsiveness. Compass is also being enhanced to use several patented techniques that are specific to analytics and ransomware. For example, analytics-based cloud brokering of data protection operations helps enterprises make secure, efficient, and cost-effective use of their cloud infrastructures. Another patented technique — dynamic IT infrastructure optimization in response to cyberthreats — offers unique ransomware analytics and automated optimization that will enable Compass to reconfigure IT infrastructure automatically when it detects cyberthreats, such as a ransomware attack, and dynamically adjust access to backup infrastructure and data to reduce exposure.
Compass is part of IBM’s product portfolio through the IBM Passport Advantage program. Through Passport Advantage, IBM sellers, partners, and distributors around the world can sell Compass under IBM part numbers to any organizations, particularly complex enterprises, that greatly benefit from the automated data protection and anti-ransomware solutions Compass delivers.
CIOReview’s report concludes, “With such innovations, all eyes will be on Cobalt Iron for further advancements in humanless, secure data backup solutions. Cobalt Iron currently focuses on IP protection and continuous R&D to bring about additional cybersecurity-related innovations, promising a more secure future for an enterprise’s data.”
About Cobalt Iron
Cobalt Iron was founded in 2013 to bring about fundamental changes in the world’s approach to secure data protection, and today the company’s Compass® is the world’s leading SaaS-based enterprise data protection system. Through analytics and automation, Compass enables enterprises to transform and optimize legacy backup solutions into a simple cloud-based architecture with built-in cybersecurity. Processing more than 8 million jobs a month for customers in 44 countries, Compass delivers modern data protection for enterprise customers around the world. www.cobaltiron.com
Product or service names mentioned herein are the trademarks of their respective owners.
Were you unable to attend Transform 2022? Check out all of the summit sessions in our on-demand library now! Watch here.
IBM is looking to grow its enterprise server business with the expansion of its Power10 portfolio announced today.
IBM Power is a RISC (reduced instruction set computer) based chip architecture that is competitive with other chip architectures including x86 from Intel and AMD. IBM’s Power hardware has been used for decades for running IBM’s AIX Unix operating system, as well as the IBM i operating system that was once known as the AS/400. In more recent years, Power has increasingly been used for Linux and specifically in support of Red Hat and its OpenShift Kubernetes platform that enables organizations to run containers and microservices.
The IBM Power10 processor was announced in August 2020, with the first server platform, the E1080 server, coming a year later in September 2021. Now IBM is expanding its Power10 lineup with four new systems, including the Power S1014, S1024, S1022 and E1050, which are being positioned by IBM to help solve enterprise use cases, including the growing need for machine learning (ML) and artificial intelligence (AI).
Usage of IBM’s Power servers could well be shifting into territory that Intel today still dominates.
Steve Sibley, VP of IBM Power product management, told VentureBeat that approximately 60% of Power workloads currently run AIX Unix. The IBM i operating system accounts for approximately 20% of workloads, while Linux makes up the remaining 20% and is on a growth trajectory.
IBM owns Red Hat, which has its namesake Linux operating system supported on Power, alongside the OpenShift platform. Sibley noted that IBM has optimized its new Power10 system for Red Hat OpenShift.
“We’ve been able to demonstrate that you can deploy OpenShift on Power at less than half the cost of an Intel stack with OpenShift because of IBM’s container density and throughput that we have within the system,” Sibley said.
Across the new servers, the ability to access more memory at greater speed than previous generations of Power servers is a key feature. The improved memory is enabled by support of the Open Memory Interface (OMI) specification that IBM helped to develop, and is part of the OpenCAPI Consortium.
“We have Open Memory Interface technology that provides increased bandwidth but also reliability for memory,” Sibley said. “Memory is one of the common areas of failure in a system, particularly when you have lots of it.”
The new servers announced by IBM all use technology from the open-source OpenBMC project that IBM helps to lead. OpenBMC provides secure code for managing the baseboard of the server in an optimized approach for scalability and performance.
Among the new servers announced today by IBM is the E1050, which is a 4RU (4 rack unit) sized server, with 4 CPU sockets, that can scale up to 16TB of memory, helping to serve large data- and memory-intensive workloads.
The S1014 and the S1024 are also both 4RU systems, with the S1014 providing a single CPU socket and the S1024 integrating a dual-socket design. The S1014 can scale up to 2TB of memory, while the S1024 supports up to 8TB.
Rounding out the new servers is the S1022, which is a 1RU server that IBM is positioning as an ideal platform for OpenShift container-based workloads.
AI and ML workloads are a particularly good use case for all the Power10 systems, thanks to optimizations that IBM has built into the chip architecture.
Sibley explained that all Power10 chips benefit from IBM’s Matrix Math Accelerator (MMA) capability. The enterprise use cases that Power10-based servers can help to support include organizations that are looking to build out risk analytics, fraud detection and supply chain forecasting AI models, among others.
IBM’s Power10 systems support and have been optimized for multiple popular open-source machine learning frameworks including PyTorch and TensorFlow.
“The way we see AI emerging is that a vast majority of AI in the future will be done on the CPU from an inference standpoint,” Sibley said.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership.
IBM’s annual Cost of Data Breach Report for 2022 is packed with revelations, and as usual none of them are good news. Headlining the report is the record-setting cost of data breaches, with the global average now at $4.35 million. The report also reveals that much of that expense comes with the data breach version of “long Covid,” expenses that are realized more than a year after the attack.
Most organizations (60%) are passing these added costs on to consumers in the form of higher prices. And while 83% of organizations now report experiencing at least one data breach, only a small minority are adopting zero trust strategies.
The IBM report draws on input from 550 global organizations surveyed about the period between March 2021 and March 2022, in partnership with the Ponemon Institute.
Though the average cost of a data breach is up, it is only by about 2.6%; the average in 2021 was $4.24 million. This represents a total climb of 13% since 2020, however, reflecting the general spike in cyber crime seen during the pandemic years.
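The report's headline percentages can be checked directly from the dollar figures quoted above. (A quick back-of-the-envelope sketch; the 2020 figure is derived from the stated 13% climb, not quoted from the report.)

```python
# Average breach cost figures quoted in the article, in millions of dollars.
avg_2022 = 4.35
avg_2021 = 4.24

# Year-over-year increase: should come out to roughly the stated 2.6%.
yoy_pct = (avg_2022 - avg_2021) / avg_2021 * 100

# A 13% total climb since 2020 implies the 2020 average was about:
implied_2020 = avg_2022 / 1.13

print(f"2021 -> 2022 increase: {yoy_pct:.1f}%")       # → 2.6%
print(f"implied 2020 average: ${implied_2020:.2f}M")  # → $3.85M
```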
Organizations are also increasingly not opting to absorb the cost of data breaches, with the majority (60%) compensating by raising consumer prices separate from any other recent increases due to inflation or supply chain issues. The report indicates that this may be an underreported upward influence on prices of consumer goods, as 83% of organizations now say that they have been breached at least once.
Brad Hong, Customer Success Manager for Horizon3.ai, sees a potential consumer backlash on the horizon once public awareness of this practice grows: “It’s already a breach of confidence to lose the confidential data of customers, and sure there’s bound to be an organization across those surveyed who genuinely did put in the effort to protect against and curb attacks, but for those who did nothing, those who, instead of creating a disaster recovery plan, just bought cyber insurance to cover the org’s operational losses, and those who simply didn’t care enough to heed the warnings, it’s the coup de grâce to then pass the cost of breaches to the same customers who are now the victims of a data breach. I’d be curious to know what percent of the 60% of organizations who increased the price of their products and services are using the extra revenue for a war chest or to actually reinforce their security—realistically, it’s most likely just being used to fill a gap in lost revenue for shareholders’ sake post-breach. Without government regulations outlining restrictions on passing cost of breach to consumer, at the least, not without the honest & measurable efforts of a corporation as their custodian, what accountability do we all have against that one executive who didn’t want to change his/her password?”
Breach costs also have an increasingly long tail, as nearly half now come over a year after the date of the attack. The largest of these are generally fines that are levied after an investigation, and decisions or settlements in class action lawsuits. While the popular new “double extortion” approach of ransomware attacks can drive long-term costs in this way, the study finds that companies paying ransom demands to settle the problem quickly aren’t necessarily seeing a large amount of overall savings: their average breach cost drops by just $610,000.
Sanjay Raja, VP of Product with Gurucul, expands on how knock-on data breach damage can continue for years: “The follow-up attack effect, as described, is a significant problem as the playbooks and solutions provided to security operations teams are overly broad and lack the necessary context and response actions for proper remediation. For example, shutting down a user or application or adding a firewall block rule or quarantining a network segment to negate an attack is not a sustainable remediation step to protect an organization on an ongoing basis. It starts with a proper threat detection, investigation and response solution. Current SIEMs and XDR solutions lack the variety of data, telemetry and combined analytics to not only identify an attack campaign and even detect variants on previously successful attacks, but also provide the necessary context, accuracy and validation of the attack to build both a precise and complete response that can be trusted. This is an even greater challenge when current solutions cannot handle complex hybrid multi-cloud architectures leading to significant blind spots and false positives at the very start of the security analyst journey.”
In spite of more than four out of five organizations now having experienced some sort of data breach, only slightly over 20% of critical infrastructure companies have moved to zero trust strategies to secure their networks. Cloud security is lagging as well, with a little under half (43%) of all respondents saying that their security practices in this area are either “early stage” or do not yet exist.
Those that have onboarded security automation and AI elements are the only group seeing massive savings: their average cost of data breach is $3.05 million lower. This particular study does not track average ransom demands, but refers to Sophos research that puts the most recent number at $812,000 globally.
The study also notes serious problems with incident response plans, especially troubling in an environment in which the average ransomware attack is now carried out in four days or less and the “time to ransom” has dropped to a matter of hours in some cases. 37% of respondents say that they do not test their incident response plans regularly. 62% say that they are understaffed to meet their cybersecurity needs, and these organizations tend to suffer over half a million more dollars in damages when they are breached.
Of course, cost of data breaches is not distributed evenly by geography or by industry type. Some are taking much bigger hits than others, reflecting trends established in prior reports. The health care industry is now absorbing a little over $10 million in damage per breach, with the average cost of data breach rising by $1 million from 2021. And companies in the United States face greater data breach costs than their counterparts around the world, at over $8 million per incident.
Shawn Surber, VP of Solutions Architecture and Strategy with Tanium, provides some insight into the unique struggles that the health care industry faces in implementing effective cybersecurity: “Healthcare continues to suffer the greatest cost of breaches but has among the lowest spend on cybersecurity of any industry, despite being deemed ‘critical infrastructure.’ The increased vulnerability of healthcare organizations to cyber threats can be traced to outdated IT systems, the lack of robust security controls, and insufficient IT staff, while valuable medical and health data— and the need to pay ransoms quickly to maintain access to that data— make healthcare targets popular and relatively easy to breach. Unlike other industries that can migrate data and sunset old systems, limited IT and security budgets at healthcare orgs make migration difficult and potentially expensive, particularly when an older system provides a small but unique function or houses data necessary for compliance or research, but still doesn’t make the cut to transition to a newer system. Hackers know these weaknesses and exploit them. Additionally, healthcare orgs haven’t sufficiently updated their security strategies and the tools that manufacturers, IT software vendors, and the FDA have made haven’t been robust enough to thwart the more sophisticated techniques of threat actors.”
Familiar incident types also lead the list of the causes of data breaches: compromised credentials (19%), followed by phishing (16%). Breaches initiated by these methods also tended to be a little more costly, at an average of $4.91 million per incident.
Though the numbers are never as neat and clean as averages would indicate, it would appear that the cost of data breaches is cut dramatically for companies that implement solid automated “deep learning” cybersecurity tools, zero trust systems and regularly tested incident response plans. Mature cloud security programs are also a substantial cost saver.
The guides leverage Astadia’s 25+ years of expertise in partnering with organizations to reduce costs, risks and timeframes when migrating their IBM mainframe applications to cloud platforms
BOSTON, August 03, 2022--(BUSINESS WIRE)--Astadia is pleased to announce the release of a new series of Mainframe-to-Cloud reference architecture guides. The documents cover how to refactor IBM mainframe applications to Microsoft Azure, Amazon Web Services (AWS), Google Cloud, and Oracle Cloud Infrastructure (OCI). The documents offer a deep dive into the migration process to all major target cloud platforms using Astadia’s FastTrack software platform and methodology.
As enterprises and government agencies are under pressure to modernize their IT environments and make them more agile, scalable and cost-efficient, refactoring mainframe applications in the cloud is recognized as one of the most efficient and fastest modernization solutions. By making the guides available, Astadia equips business and IT professionals with a step-by-step approach on how to refactor mission-critical business systems and benefit from highly automated code transformation, data conversion and testing to reduce costs, risks and timeframes in mainframe migration projects.
"Understanding all aspects of legacy application modernization and having access to the most performant solutions is crucial to accelerating digital transformation," said Scott G. Silk, Chairman and CEO. "More and more organizations are choosing to refactor mainframe applications to the cloud. These guides are meant to assist their teams in transitioning fast and safely by benefiting from Astadia’s expertise, software tools, partnerships, and technology coverage in mainframe-to-cloud migrations."
The new guides are part of Astadia’s free Mainframe-to-Cloud Modernization series, an extensive collection of guides covering various mainframe migration options, technologies, and cloud platforms. The series covers IBM (NYSE:IBM) mainframes.
In addition to the reference architecture diagrams, these comprehensive guides include various techniques and methodologies that may be used in forming a complete and effective Legacy Modernization plan. The documents analyze the important role of the mainframe platform, and how to preserve previous investments in information systems when transitioning to the cloud.
In each of the IBM Mainframe Reference Architecture white papers, readers will explore:
Benefits, approaches, and challenges of mainframe modernization
Understanding typical IBM Mainframe Architecture
An overview of Azure/AWS/Google Cloud/Oracle Cloud
Detailed diagrams of IBM mappings to Azure/AWS/ Google Cloud/Oracle Cloud
How to ensure project success in mainframe modernization
The guides are available for download here:
To access more mainframe modernization resources, visit the Astadia learning center on www.astadia.com.
Astadia is the market-leading software-enabled mainframe migration company, specializing in moving IBM and Unisys mainframe applications and databases to distributed and cloud platforms in unprecedented timeframes. With more than 30 years of experience, and over 300 mainframe migrations completed, enterprises and government organizations choose Astadia for its deep expertise, range of technologies, and the ability to automate complex migrations, as well as testing at scale. Learn more on www.astadia.com.
Prasanna Burri began his career as a mechanical engineer in India but had a secret passion for IT. His endeavour to get into the technology space led him to learn about enterprise application platforms and he eventually started his ERP career at IBM. Later he joined SAP Labs in the US where he was immersed in product management for cloud technology. Since 2013 he’s been overseeing the Dangote Group’s IT operations across Africa.
In a big group like Dangote, how challenging is it to manage IT in all parts of the company?
It’s a complex organisation with a lot of diversified business lines and regions. We embrace proven technologies like the Microsoft platform for endpoint management, servers, Active Directory, email and endpoint protection. It was a modest start, but today we spend almost 10 times more than in 2014 on Azure Cloud subscriptions. Almost all our applications run in the cloud, and I can say about 95% of our operations run in different cloud environments, whether SAP or Microsoft. We continue to expand, implementing new processes and going through continuous improvement cycles, and we’re a certified SAP Center of Excellence. We also have a very strong hybrid cloud infrastructure and a dedicated, in-house talent base that’s open to embracing newer technologies like AI and ML.
Can you describe the importance of cloud technologies across Africa today?
There’s an increasing appetite, even though scaling on-premises infrastructure is time-consuming because of the logistics of bringing in equipment. A lot of good talent from Africa has been migrating to greener pastures, especially in the last few years. With cloud technologies, though, companies can scale despite talent shortages in the region, and supporting tech talent can be found anywhere in the world. We ultimately don’t suffer the delays of acquiring equipment. It’s a big shot in the arm, especially in the environment we operate in, where we can scale fast and where we have more visibility and control over what’s happening thanks to the remote management options and features available with these platforms. It’s also necessary to have a hybrid setup in case of any large-scale disruption, even though they’re rare. Cloud is the way to go, but it depends on the industry and the region.
With cloud technologies come the opportunity to implement AI and ML. How is the company taking advantage of this?
We have at least three use cases we’ve been working on: logistics, which is fleet management for our trucks; master data management and data clean-up, where AI does a better job; and running optical character recognition (OCR) automatically on vendor invoices using the Microsoft Power Platform, along with ML services. We’re also trying to leverage the capabilities of AI and ML in security on the Azure platform, where Microsoft Endpoint Manager and Intune orchestrate the security of servers, endpoints and mobile devices across the group from the cloud.
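The invoice-OCR workflow Burri describes can be illustrated with a minimal sketch of the step that follows OCR: once a cloud OCR service has returned an invoice's raw text, simple patterns can pull out structured fields for downstream ERP processing. The field names and regex patterns below are illustrative assumptions, not Dangote's actual pipeline.

```python
import re

# Hypothetical patterns for two common invoice fields; a real pipeline
# (e.g. one built on Azure ML services, as mentioned in the interview)
# would handle many more layouts and fields.
INVOICE_NO = re.compile(r"Invoice\s*(?:No\.?|#)\s*[:\-]?\s*(\S+)", re.IGNORECASE)
TOTAL = re.compile(r"Total\s*(?:Due)?\s*[:\-]?\s*\$?([\d,]+\.\d{2})", re.IGNORECASE)

def parse_invoice(text: str) -> dict:
    """Extract a few key fields from OCR'd invoice text."""
    fields = {}
    m = INVOICE_NO.search(text)
    if m:
        fields["invoice_no"] = m.group(1)
    m = TOTAL.search(text)
    if m:
        fields["total"] = float(m.group(1).replace(",", ""))
    return fields

sample = "Invoice No: INV-2023-0042\nVendor: Acme Ltd\nTotal Due: $1,250.00"
print(parse_invoice(sample))
```

In practice, the parsed fields would be validated against vendor master data before posting to the ERP, which is where the master-data clean-up use case ties in.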
What is the state of connectivity that strings all these technologies together?
We don’t see as many disruptions and downtime for business solely due to network connectivity. It’s a lot less nowadays than maybe five years ago. There’s also a continuous growth in bandwidth. A lot of times, there are issues in the last mile. The trunk routes are generally okay, but there’s still always room for growth and optimisation, and there’s reasonable capacity, especially in urban regions with the advent of some newer technologies like Starlink. I expect that in a year or two, we’ll start seeing a greater prevalence of connectivity in the remote parts as well.
What other challenges do you face when implementing these technologies in the company?
Technology in itself is never a problem. The hardest part is acquiring talent. Sometimes you don’t find engineering talent or the talent you have has matured, which may leave you short because they’ve found better opportunities in the West or Middle East, for instance. Then there’s user adoption and change management in processes and new technologies. Those are the two challenges that revolve around implementing new technologies.
How then do you find talent and screen them for suitability?
Most of the time, we hire people based on their attitude and knowledge, and in some cases also for their experience. We use recruiting tools for job board postings, and online assessment tools to pick qualified people based on the job specification. Then in some cases, they might even receive some additional assignments to ensure that the aptitude is there.
How big is the ICT team at Dangote Group?
We are close to 150 personnel, and a good part of our team works on endpoint security and last-mile tech support. We have one of the leanest shops in that respect, but we’re looking to hire more local talent and become more resilient, given the growing pressure of acquiring talent from the marketplace. We also have a constant flow of training from original equipment manufacturers (OEMs) and LinkedIn Learning subscriptions for the majority of our information work staff.
What would be your parting shot to other Africa-based enterprises looking to adopt cloud, AI and machine learning?
The primary goal is to sustain and enable businesses to operate efficiently with certain proven innovations. Also, the target is to expand the presence of the organisation in the market, and keep customers happy. IT experts should know the goal of the business before adopting technology. They can think through challenges like how to make sure the dispatch operations run without stopping, ensure there’s adequate disaster resilience, and that end users are being productive with such tools and services. Successful IT leaders have a consultancy and advisory approach. They understand the needs of the business and can conceive solutions and relay them in a way that gets the buy-in from the leadership.