But IBM has always carved its own path. For example, the Armonk, NY-based company doesn’t use the term “Great Resignation,” at least internally. Of course, that doesn’t mean the tech giant isn’t aware of the nationwide talent shortage and the highly competitive labor market that’s resulted.
“This is a time to ensure we re-engage our population,” Louissaint says. “By nature of my title, my goal is to continue to transform and pivot our company toward being more growth-minded, transforming it directly through leadership: leadership development, getting people in the right jobs and ensuring we have the right succession plans.”
Like many companies since the COVID-19 pandemic, IBM has relied upon its business resource groups – its label for employee resource groups (ERGs) – to maintain and even boost retention. Traditionally, ERGs consist of employees who volunteer their time and effort to foster an inclusive workplace. Due to their motivations, needs and the general nature of ERG work, employees who lead these groups are more likely to be Black, Indigenous and People of Color (BIPOC) and oftentimes women. ERGs are a way for underrepresented groups to band together to recruit more talent like them into their companies and make sure that talent feels supported and gets promoted.
“It’s a lot easier to leave a company where you’ve only interacted with colleagues through a screen,” Louissaint says. “Our diversity groups and communities have gotten a lot stronger, which builds commitment to the company and community to each other. We’ve found that through our communities, business resource groups, open conversations and by democratizing leadership by using virtual technologies like Slack, the company has become smaller and the interactions are a lot more personal.”
A major contributor to the Great Resignation has been the push for workers to return to the office. While Apple and Google ruffled feathers by asking employees back for at least a couple of days a week, Tesla went one step further, demanding that employees report to the office five days a week, as if the COVID-19 pandemic had never happened.
Ahead of the game, IBM was one of the first major tech firms to embrace remote work, with as much as 40% of its workforce at home during the 2000s. A shift came in 2017, when the company called many remote workers back to the office, but since the pandemic, only 20% of the company’s U.S. employees are in the office three days a week or more, according to IBM CEO Arvind Krishna. In June, Krishna added that he doesn’t think the balance will ever return to more than 60% of workers in the office.
“We’ve always been defined by flexibility, even prior to the pandemic that’s what we were known for and what differentiated us,” Louissaint says. “Continuing to double down on flexibility has been a value to us and to our people.”
IBM has also been defined by its eye toward the future, particularly when it comes to workforce development. Over the past decade, the tech giant has partnered with educational institutions, non-governmental organizations and other companies to discover and nurture talent from untapped pools and alternative channels. Last year, the company vowed to train 30 million individuals on technical skills by 2030.
“Our people crave learning and are highly curious,” Louissaint says, adding that the average IBM employee consumes about 88 hours of learning through the company’s internal platform each year. Nearly all employees (95%) are on the platform in any given quarter.
“We’ve been building a strong learning environment where employees can build new skills and drive toward new jobs and experiences,” he says. “We also find that the individuals who consume the most learning are more likely to get promoted. It’s 30% more likely for a super learner to be promoted or switch jobs, so the incentive is continued growth and opportunity for advancement.”
I recently met with Dr. Nick Fuller, Vice President, Distributed Cloud, at IBM Research for a discussion about IBM’s long-range plans and strategy for artificial intelligence and machine learning at the edge.
Dr. Fuller is responsible for providing AI and platform–based innovation for enterprise digital transformation spanning edge computing and distributed cloud management. He is an IBM Master Inventor with over 75 patents and co-author of 75 technical publications. Dr. Fuller obtained his Bachelor of Science in Physics and Math from Morehouse College and his PhD in Applied Physics from Columbia University.
Edge In, not Cloud Out
In general, Dr. Fuller told me that IBM is focused on developing an "edge in" position versus a "cloud out" position with data, AI, and Kubernetes-based platform technologies to scale hub and spoke deployments of edge applications.
A hub plays the role of a central control plane used for orchestrating the deployment and management of edge applications in a number of connected spoke locations such as a factory floor or a retail branch, where data is generated or locally aggregated for processing.
“Cloud out” refers to the paradigm in which cloud service providers extend their cloud architecture out to edge locations. In contrast, “edge in” refers to a provider-agnostic architecture that is cloud-independent and treats the data plane as a first-class citizen.
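The hub-and-spoke control pattern described above can be sketched in a few lines. This is an illustrative toy model, not IBM's actual platform API: the hub holds the desired application state and pushes it to each registered spoke.

```python
# Minimal sketch of a hub-and-spoke control plane: the hub holds the
# desired state and pushes application deployments to registered spokes.
# All class and method names here are illustrative, not IBM APIs.

class Spoke:
    def __init__(self, name):
        self.name = name
        self.apps = {}          # app name -> version running locally

    def deploy(self, app, version):
        self.apps[app] = version

class Hub:
    def __init__(self):
        self.spokes = {}
        self.desired = {}       # app name -> desired version

    def register(self, spoke):
        self.spokes[spoke.name] = spoke

    def set_desired(self, app, version):
        self.desired[app] = version

    def reconcile(self):
        # Push every desired app/version to every spoke that lags behind.
        for spoke in self.spokes.values():
            for app, version in self.desired.items():
                if spoke.apps.get(app) != version:
                    spoke.deploy(app, version)

hub = Hub()
for site in ("factory-floor", "retail-branch"):
    hub.register(Spoke(site))
hub.set_desired("defect-detector", "v2")
hub.reconcile()
```

The single `reconcile()` loop is what "managed from a single unified control plane" amounts to in miniature: one place decides, many locations converge.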
IBM's overall architectural principles are scalability, repeatability, and full-stack solution management, allowing everything to be managed through a single unified control plane.
IBM’s Red Hat platform and infrastructure strategy anchors the application stack with a unified, scalable, and managed OpenShift-based control plane equipped with a high-performance storage appliance and self-healing system capabilities (inclusive of semi-autonomous operations).
IBM’s strategy also includes several in-progress platform-level technologies for scalable data, AI/ML runtimes, accelerator libraries for Day-2 AI operations, and scalability for the enterprise.
It is important to mention that IBM is designing its edge platforms with labor costs and the technical workforce in mind. Data scientists with PhDs are in high demand, making them difficult to find and expensive to hire. IBM is designing its edge system capabilities and processes so that domain experts, rather than PhDs, can deploy new AI models and manage Day-2 operations.
Why edge is important
Advances in computing and storage have made it possible for AI to process mountains of accumulated data to provide solutions. By bringing AI closer to the source of data, edge computing is faster and more efficient than the cloud. While cloud data accounts for 60% of the world's data today, vast amounts of new data are being created at the edge, including by industrial applications, traffic cameras, and order management systems, all of which can be processed at the edge in a timely manner.
Public cloud and edge computing differ in capacity, technology, and management. An advantage of edge is that data is processed and analyzed at or near its collection point. In the case of cloud, data must be transferred from a local device into the cloud for analytics and then transferred back to the edge again. Moving data through the network consumes capacity and adds latency to the process. It's easy to see why executing a transaction at the edge reduces latency and eliminates unnecessary load on the network.
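The latency argument can be made concrete with a back-of-the-envelope calculation. The figures below (link speed, round-trip time, compute times) are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope comparison of a cloud round trip versus local edge
# processing. All numbers are illustrative assumptions, not benchmarks.

def cloud_latency_ms(payload_mb, uplink_mbps=50, rtt_ms=40, compute_ms=5):
    # Transfer time for the payload, plus the network round trip,
    # plus cloud-side compute.
    transfer_ms = payload_mb * 8 / uplink_mbps * 1000
    return transfer_ms + rtt_ms + compute_ms

def edge_latency_ms(compute_ms=15):
    # No network hop: only the (typically slower) local compute.
    return compute_ms

cloud = cloud_latency_ms(payload_mb=2)   # 320 ms transfer + 40 ms RTT + 5 ms
edge = edge_latency_ms()                 # 15 ms, no data movement
```

Even with the edge device assumed three times slower at the compute step, avoiding the transfer dominates: the edge path wins by more than an order of magnitude under these assumptions.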
Increased privacy is another benefit of processing data at the edge. Analyzing data where it originates limits the risk of a security breach. Most of the communications between the edge and the cloud is then confined to such things as reporting, data summaries, and AI models, without ever exposing the raw data.
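A minimal sketch of that privacy pattern, with hypothetical sensor readings: the raw values stay on the device, and only an aggregate summary would cross the network.

```python
# Sketch of the edge privacy pattern described above: raw readings stay
# on the edge device; only an aggregate summary is sent to the cloud.
# The readings are made-up illustrative values.

def summarize(readings):
    # Local aggregation: the cloud sees counts and statistics, never raw data.
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
    }

raw = [71.2, 70.8, 90.5, 71.0]   # e.g., sensor temperatures, kept on-device
report = summarize(raw)          # only this small dict would be uploaded
```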
IBM at the Edge
In our discussion, Dr. Fuller provided a few examples to illustrate how IBM plans to provide new and seamless edge solutions for existing enterprise problems.
Example #1 – McDonald’s drive-thru
Dr. Fuller’s first example centered on the quick-service restaurant (QSR) problem of drive-thru order taking. Last year, IBM acquired an automated order-taking system from McDonald's. As part of the acquisition, IBM and McDonald's established a partnership to perfect voice-ordering methods using AI. Drive-thru orders are a significant percentage of total orders for McDonald's and other QSR chains.
McDonald's and other QSR restaurants would like every order to be processed as quickly and accurately as possible. For that reason, McDonald's conducted trials at ten Chicago restaurants using an edge-based AI ordering system with NLP (Natural Language Processing) to convert spoken orders into a digital format. It was found that AI had the potential to reduce ordering errors and processing time significantly. Since McDonald's sells almost 7 million hamburgers daily, shaving a minute or two off each order represents a significant opportunity to address labor shortages and increase customer satisfaction.
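To show the shape of the task (not McDonald's actual system, which uses trained speech and language models), here is a toy parser that turns a spoken-order transcript into a structured digital ticket:

```python
# Toy illustration of converting a drive-thru utterance into a structured
# order ticket. A production NLP system would use trained speech/language
# models; this keyword matcher only demonstrates the input/output shape.

MENU = {"hamburger": 2.49, "fries": 1.89, "shake": 3.19}   # made-up prices
NUMBER_WORDS = {"a": 1, "one": 1, "two": 2, "three": 3}

def parse_order(utterance):
    words = utterance.lower().replace(",", "").split()
    ticket, qty = {}, 1
    for word in words:
        if word in NUMBER_WORDS:
            qty = NUMBER_WORDS[word]
        elif word.rstrip("s") in MENU:            # crude plural handling
            item = word.rstrip("s")
            ticket[item] = ticket.get(item, 0) + qty
            qty = 1
    return ticket

order = parse_order("two hamburgers and a shake")
```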
Example #2 – Boston Dynamics and Spot the agile mobile robot
According to an earlier IBM survey, many manufacturers have already implemented AI-driven robotics with autonomous decision-making capability. The study also indicated that over 80 percent of companies believe AI can help improve future business operations. However, some companies expressed concern about the limited mobility of edge devices and sensors.
To develop a mobile edge solution, IBM teamed up with Boston Dynamics. The partnership created an agile mobile robot using IBM Research and IBM Sustainability Software AI technology. The device can analyze visual sensor readings in hazardous and challenging industrial environments such as manufacturing plants, warehouses, electrical grids, and waste treatment plants. The value proposition that Boston Dynamics brought to the partnership was Spot, the agile mobile robot: a walking, sensing, and actuation platform. Like all edge applications, the robot’s wireless mobility uses self-contained AI/ML that doesn’t require access to cloud data. It uses cameras to read analog devices, visually monitor fire extinguishers, and conduct visual inspections of human workers to determine whether required safety equipment is being worn.
IBM was able to show up to a 10X speedup by automating some manual tasks, such as converting the detection of a problem into an immediate work order in IBM Maximo to correct it. A fast automated response was not only more efficient, but it also improved the safety posture and risk management for these facilities. Similarly, some factories need to thermally monitor equipment to identify any unexpected hot spots that may show up over time, indicative of a potential failure.
IBM is working with National Grid, an energy company, to develop a mobile solution using Spot, the agile mobile robot, for image analysis of transformers and thermal connectors. As shown in the above graphic, Spot also monitored connectors on both flat surfaces and 3D surfaces. IBM was able to show that Spot could detect excessive heat build-up in small connectors, potentially avoiding unsafe conditions or costly outages. This AI/ML edge application can produce faster response times when an issue is detected, which is why IBM believes significant gains are possible by automating the entire process.
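The thermal-inspection idea reduces, at its core, to scanning a grid of temperature readings and flagging cells above a safe threshold. The grid and threshold below are made-up illustrative values:

```python
# Sketch of the thermal-inspection idea: scan a grid of temperature
# readings from a connector image and flag cells above a safe threshold.
# The threshold and readings are made-up illustrative values.

def find_hotspots(grid, threshold_c=80.0):
    # Return (row, col, temp) for every reading above the threshold.
    return [
        (r, c, temp)
        for r, row in enumerate(grid)
        for c, temp in enumerate(row)
        if temp > threshold_c
    ]

thermal_image = [
    [41.0, 43.5, 40.2],
    [44.1, 95.3, 42.8],   # one connector running far too hot
    [40.9, 42.0, 41.5],
]
hotspots = find_hotspots(thermal_image)
```

In the automated pipeline the article describes, each flagged cell would become a work order rather than a manual follow-up, which is where the claimed response-time gains come from.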
IBM market opportunities
Drive-thru orders and mobile robots are just a few examples of the millions of potential AI applications that exist at the edge and are driven by several billion connected devices.
Edge computing is an essential part of enterprise digital transformation. Enterprises seek ways to demonstrate the feasibility of solving business problems using AI/ML and analytics at the edge. However, once a proof of concept has been successfully demonstrated, it is a common problem for a company to struggle with scalability, data governance, and full-stack solution management.
Challenges with scaling
“Determining entry points for AI at the edge is not the difficult part,” Dr. Fuller said. “Scale is the real issue.”
Scaling edge models is complicated because there are so many edge locations with large amounts of diverse content and a high device density. Because large amounts of data are required for training, data gravity is a potential problem. Further, in many scenarios, vast amounts of data are generated quickly, leading to potential data storage and orchestration challenges. AI models are also rarely "finished." Monitoring and retraining of models are necessary to keep up with changes in the environment.
Through IBM Research, IBM is addressing the many challenges of building an all-encompassing edge architecture and horizontally scalable data and AI technologies. IBM has a wealth of edge capabilities and an architecture to create the appropriate platform for each application.
IBM AI entry points at the edge
IBM sees edge computing as a $200 billion market by 2025. Dr. Fuller and his organization have identified four key market entry points for developing and expanding IBM’s edge compute strategy. In order of size, IBM believes its priority edge markets to be intelligent factories (Industry 4.0), telcos, retail automation, and connected vehicles.
IBM and its Red Hat portfolio already have an established presence in each market segment, particularly in intelligent operations and telco. Red Hat is also active in the connected vehicles space.
There have been three prior industrial revolutions since the 1700s; the current, in-progress fourth revolution, Industry 4.0, centers on digital transformation.
Manufacturing is the fastest-growing and the largest of IBM’s four entry markets. In this segment, AI at the edge can improve quality control, production optimization, asset management, and supply chain logistics. IBM believes there are opportunities to achieve a 4x speedup in implementing edge-based AI solutions for manufacturing operations.
For its Industry 4.0 use case development, IBM, through product, development, research and consulting teams, is working with a major automotive OEM. The partnership has established the following joint objectives:
Maximo Application Suite
IBM’s Maximo Application Suite plays an important part in implementing large manufacturers' current and future IBM edge solutions. Maximo is an integrated public or private cloud platform that uses AI, IoT, and analytics to optimize performance, extend asset lifecycles, and reduce operational downtime and costs. IBM is working with several large manufacturing clients currently using Maximo to develop edge use cases, and even uses Maximo in its own manufacturing operations.
IBM has research underway to develop a more efficient method of handling life cycle management of large models that require immense amounts of data. Day 2 AI operations tasks can sometimes be more complex than initial model training, deployment, and scaling. Retraining at the edge is difficult because resources are typically limited.
Once a model is trained and deployed, it is important to monitor it for drift caused by changes in data distributions or anything that might cause a model to deviate from original requirements. Inaccuracies can adversely affect model ROI.
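Drift monitoring of the kind described can be sketched with a simple statistic: compare the feature distribution seen at inference time against the training distribution and flag the model when the gap exceeds a tolerance. Real systems use richer tests (population stability index, Kolmogorov-Smirnov); this mean-shift check is only illustrative:

```python
# Minimal sketch of drift monitoring: compare live feature values against
# the training distribution and flag the model for retraining when the
# gap grows. Real systems use richer statistics (PSI, KS tests, etc.);
# this relative mean-shift check is only illustrative.

def drift_score(train_sample, live_sample):
    def mean(xs):
        return sum(xs) / len(xs)
    m_train, m_live = mean(train_sample), mean(live_sample)
    # Absolute shift in means, scaled by the training mean's magnitude.
    return abs(m_live - m_train) / (abs(m_train) + 1e-9)

def needs_retraining(train_sample, live_sample, tolerance=0.25):
    return drift_score(train_sample, live_sample) > tolerance

train = [10.0, 11.0, 9.5, 10.5]      # feature values seen during training
stable = [10.2, 10.8, 9.9, 10.1]     # similar distribution: no drift
shifted = [15.0, 16.2, 14.8, 15.5]   # distribution moved: drift flagged
```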
Day-2 AI Operations (retraining and scaling)
Day-2 AI operations consist of continual updates to AI models and applications to keep up with changes in data distributions, changes in the environment, a drop in model performance, availability of new data, and/or new regulations.
IBM recognizes the advantages of performing Day-2 AI operations, which include scaling and retraining at the edge. It appears that IBM is the only company with an architecture equipped to effectively handle Day-2 AI operations. That is a significant competitive advantage for IBM.
A company using an architecture that requires data to be moved from the edge back into the cloud for Day-2 related work will be unable to support many factory AI/ML applications because of the sheer number of AI/ML models to support (100s to 1000s).
“There is a huge proliferation of data at the edge that exists in multiple spokes,” Dr. Fuller said. "However, all that data isn’t needed to retrain a model. It is possible to cluster data into groups and then use sampling techniques to retrain the model. There is much value in federated learning from our point of view.”
Federated learning is a promising training solution being researched by IBM and others. It preserves privacy by using a collaboration of edge devices to train models without sharing the data with other entities. It is a good framework to use when resources are limited.
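The core federated idea, fitting locally and sharing only parameters, can be shown with a one-parameter least-squares model. This is a didactic sketch, not IBM's federated learning stack:

```python
# Sketch of the federated idea: each spoke fits a model on its own data
# and shares only the fitted parameter with the hub; raw readings never
# leave the spoke. A one-parameter least-squares model keeps it simple.

def local_fit(data):
    # Closed-form least-squares slope for y ~ w * x, computed on-device.
    return sum(x * y for x, y in data) / sum(x * x for x, _ in data)

def aggregate(spoke_data_sets):
    # The hub sees only one number per spoke, never the data itself.
    slopes = [local_fit(d) for d in spoke_data_sets]
    return sum(slopes) / len(slopes)

# Two spokes whose private data both follow y = 2x.
spoke_a = [(1.0, 2.0), (2.0, 4.0)]
spoke_b = [(3.0, 6.0), (4.0, 8.0)]
global_slope = aggregate([spoke_a, spoke_b])
```

Averaging fitted parameters instead of pooling data is the privacy-preserving trade at the heart of federated learning; the clustering and sampling Dr. Fuller mentions would decide which local data each spoke fits on.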
Dealing with limited resources at the edge is a challenge. IBM’s edge architecture accommodates the need to ensure resource budgets for AI applications are met, especially when deploying multiple applications and multiple models across edge locations. For that reason, IBM developed a method to deploy data and AI applications to scale Day-2 AI operations utilizing hub and spokes.
The graphic above shows the current status quo methods of performing Day-2 operations using centralized applications and a centralized data plane compared to the more efficient managed hub and spoke method with distributed applications and a distributed data plane. The hub allows it all to be managed from a single pane of glass.
Data Fabric Extensions to Hub and Spokes
IBM uses hub and spoke as a model to extend its data fabric, though the model should not be thought of as a traditional hub and spoke. IBM’s hub provides centralized capabilities to manage clusters and to create multiple hubs that can be aggregated to a higher level. This architecture has four important data management capabilities.
In addition to AI deployments, the hub and spoke architecture and the previously mentioned capabilities can be employed more generally to tackle challenges faced by many enterprises in consistently managing an abundance of devices within and across their enterprise locations. Management of the software delivery lifecycle or addressing security vulnerabilities across a vast estate are a case in point.
Multicloud and Edge platform
In the context of its strategy, IBM sees edge and distributed cloud as an extension of its hybrid cloud platform built around Red Hat OpenShift. One of the newer and more useful options created by the Red Hat development team is Single Node OpenShift (SNO), a compact version of OpenShift that fits on a single server. It is suited to locations that can host a single server but not a full cluster.
For smaller footprints such as industrial PCs or computer vision boards (for example, the NVIDIA Jetson Xavier), Red Hat is working on a project that builds an even smaller version of OpenShift, called MicroShift, that provides full application deployment and Kubernetes management capabilities. It is packaged so that it can be used for edge-device deployments.
Overall, IBM and Red Hat have developed a full complement of options to address a large spectrum of deployments across different edge locations and footprints, from lightweight containers to full-blown Kubernetes application management, spanning MicroShift, OpenShift, and IBM Edge Application Manager.
Much is still in the research stage. IBM's objective is to achieve greater consistency in how locations and application lifecycles are managed.
First, Red Hat plans to introduce hierarchical layers of management with Red Hat Advanced Cluster Management (RHACM), to scale by two to three orders of magnitude the number of edge locations managed by this product. Additionally, securing edge locations is a major focus. Red Hat is continuously expanding platform security features, for example by recently including Integrity Measurement Architecture in Red Hat Enterprise Linux, or by adding Integrity Shield to protect policies in Red Hat Advanced Cluster Management (RHACM).
Red Hat is partnering with IBM Research to advance technologies that will permit it to protect platform integrity and the integrity of client workloads through the entire software supply chains. In addition, IBM Research is working with Red Hat on analytic capabilities to identify and remediate vulnerabilities and other security risks in code and configurations.
Telco network intelligence and slice management with AI/ML
Communication service providers (CSPs) such as telcos are key enablers of 5G at the edge. 5G benefits for these providers include:
The end-to-end 5G network comprises the Radio Access Network (RAN), transport, and core domains. Network slicing in 5G is an architecture that enables multiple virtual and independent end-to-end logical networks with different characteristics such as low latency or high bandwidth, to be supported on the same physical network. This is implemented using cloud-native technology enablers such as software defined networking (SDN), virtualization, and multi-access edge computing. Slicing offers necessary flexibility by allowing the creation of specific applications, unique services, and defined user groups or networks.
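Conceptually, slicing is an admission-control problem: several logical slices with different guarantees share one physical capacity budget. The following toy model (made-up capacity and slice figures) illustrates the idea:

```python
# Toy model of 5G network slicing: multiple logical slices with different
# latency/bandwidth guarantees share one physical capacity budget. The
# capacity and slice figures are made-up illustrative values.

PHYSICAL_CAPACITY_MBPS = 1000

class Slice:
    def __init__(self, name, bandwidth_mbps, max_latency_ms):
        self.name = name
        self.bandwidth_mbps = bandwidth_mbps
        self.max_latency_ms = max_latency_ms

def admit(slices, candidate, capacity=PHYSICAL_CAPACITY_MBPS):
    # Admit the new slice only if total reserved bandwidth still fits
    # within the shared physical capacity.
    used = sum(s.bandwidth_mbps for s in slices)
    if used + candidate.bandwidth_mbps <= capacity:
        slices.append(candidate)
        return True
    return False

active = []
ok_iot = admit(active, Slice("low-latency-iot", 100, 5))
ok_video = admit(active, Slice("high-bandwidth-video", 800, 50))
ok_extra = admit(active, Slice("bulk-backup", 200, 200))  # exceeds budget
```

The AI/ML role IBM describes would sit on top of a check like this, predicting demand and resizing or re-placing slices so the guarantees keep being met.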
An important aspect of enabling AI at the edge requires IBM to provide CSPs with the capability to deploy and manage applications across various enterprise locations, possibly spanning multiple end-to-end network slices, using a single pane of glass.
5G network slicing and slice management
Network slices are an essential part of IBM's edge infrastructure that must be automated, orchestrated and optimized according to 5G standards. IBM’s strategy is to leverage AI/ML to efficiently manage, scale, and optimize the slice quality of service, measured in terms of bandwidth, latency, or other metrics.
5G and AI/ML at the edge also represent a significant opportunity for CSPs to move beyond traditional cellular services and capture new sources of revenue with new services.
Communications service providers need management and control of 5G network slicing enabled with AI-powered automation.
Dr. Fuller sees a variety of opportunities in this area. "When it comes to applying AI and ML on the network, you can detect things like intrusion detection and malicious actors," he said. "You can also determine the best way to route traffic to an end user. Automating 5G functions that run on the network using IBM network automation software also serves as an entry point.”
In IBM’s current telecom trial, IBM Research is spearheading the development of a range of capabilities targeted for the IBM Cloud Pak for Network Automation product using AI and automation to orchestrate, operate and optimize multivendor network functions and services that include:
Future leverage of these capabilities by existing IBM clients that use the Cloud Pak for Network Automation (e.g., DISH) can offer further differentiation for CSPs.
5G radio access
Open radio access networks (O-RANs) are expected to significantly impact telco 5G wireless edge applications by allowing a greater variety of units to access the system. The O-RAN concept separates the DU (Distributed Unit) and CU (Centralized Unit) from the Baseband Unit used in 4G and connects them with open interfaces.
The O-RAN system is more flexible. It uses AI to establish connections made via open interfaces that optimize the category of a device by analyzing information about its prior use. Like other edge models, the O-RAN architecture provides an opportunity for continuous monitoring, verification, analysis, and optimization of AI models.
The IBM-telco collaboration is expected to advance O-RAN interfaces and workflows. Areas currently under development are:
IBM Cloud and Infrastructure
The cornerstone for the delivery of IBM's edge solutions as a service is IBM Cloud Satellite. It presents a consistent, cloud-native operational view with OpenShift and IBM Cloud PaaS services at the edge. In addition, IBM's integrated hardware and software edge systems will provide RHACM-based management of the platform when clients or third parties have existing managed-as-a-service models. In either case, this is done within a single control plane for hubs and spokes that helps optimize execution and management from any cloud to the edge.
IBM's focus on “edge in” means it can provide infrastructure such as the software-defined storage shown above: a federated-namespace data lake that surrounds other hyperscalers' clouds. Additionally, IBM is exploring integrated full-stack edge storage appliances based on hyperconverged infrastructure (HCI), such as Spectrum Fusion HCI, for enterprise edge deployments.
As mentioned earlier, data gravity is one of the main driving factors of edge deployments. IBM has designed its infrastructure to meet those data gravity requirements, not just for the existing hub and spoke topology but also for a future spoke-to-spoke topology where peer-to-peer data sharing becomes imperative, as the examples in this article illustrate.
Edge is a distributed computing model. One of its main advantages is that computing, data storage, and processing sit close to where data is created. Without the need to move data to the cloud for processing, real-time application of analytics and AI capabilities provides immediate solutions and drives business value.
IBM’s goal is not to move the entirety of its cloud infrastructure to the edge. That would have little value: the edge would simply function as a spoke, operating on actions and configurations dictated by the hub.
IBM’s architecture will provide the edge with autonomy to determine where data should reside and from where the control plane should be exercised.
Equally important, IBM foresees this architecture evolving into a decentralized model capable of edge-to-edge interactions. IBM has no firm designs for this as yet. However, the plan is to make the edge infrastructure and platform a first-class citizen instead of relying on the cloud to drive what happens at the edge.
Developing a complete and comprehensive AI/ML edge architecture - and in fact, an entire ecosystem - is a massive undertaking. IBM faces many known and unknown challenges that must be solved before it can achieve success.
However, IBM is one of the few companies with the necessary partners and the technical and financial resources to undertake and successfully implement a project of this magnitude and complexity.
It is reassuring that IBM has a plan and that its plan is sound.
Paul Smith-Goodson is Vice President and Principal Analyst for quantum computing, artificial intelligence and space at Moor Insights and Strategy. You can follow him on Twitter for more current information on quantum, AI, and space.
Note: Moor Insights & Strategy writers and editors may have contributed to this article.
Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Infinidat, Infosys, Inseego, IonQ, IonVR, Infiot, Intel, Interdigital, Jabil Circuit, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, MulteFire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign, TE Connectivity, TensTorrent, Tobii Technology, Teradata, T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.
Interested, he talked to a counselor to learn more about P-TECH, an early college program where he could earn an associate’s degree along with his high school diploma. Liking the sound of the program, he enrolled in the inaugural P-TECH class as a freshman at Longmont’s Skyline High School.
“I really loved working on computers, even before P-TECH,” he said. “I was a hobbyist. P-TECH gave me a pathway.”
IBM hired him as a cybersecurity analyst once he completed the apprenticeship.
“P-TECH has given me a great advantage,” he said. “Without it, I would have been questioning whether to go into college. Having a college degree at 18 is great to put on a resume.”
Litow’s idea was to get more underrepresented young people into tech careers by giving them a direct path to college while in high school — and in turn create a pipeline of employees with the job skills businesses were starting to value over four-year college degrees.
The program, which includes mentors and internships provided by business partners, gives high school students up to six years to earn an associate's degree at no cost.
In Colorado, St. Vrain Valley was among the first school districts chosen by the state to offer a P-TECH program after the Legislature passed a bill to provide funding — and the school district has embraced the program.
Colorado’s first P-TECH programs started in the fall of 2016 at three high schools, including Skyline High. Over the last six years, 17 more Colorado high schools have adopted P-TECH, for a total of 20. Three of those are in St. Vrain Valley, with a fourth planned to open in the fall of 2023 at Longmont High School.
Each St. Vrain Valley high school offers a different focus supported by different industry partners.
Skyline partners with IBM, with students earning an associate’s degree in Computer Information Systems from Front Range. Along with being the first, Skyline’s program is the largest, enrolling up to 55 new freshmen each year.
Programs at the other schools are capped at 35 students per grade.
Frederick High’s program, which started in the fall of 2019, has a bioscience focus, partners with Aims Community College and works with industry partners Agilent Technologies, Tolmar, KBI Biopharma, AGC Biologics and Corden Pharma.
Silver Creek High’s program started a year ago with a cybersecurity focus. The Longmont school partners with Front Range and works with industry partners Seagate, Cisco, PEAK Resources and Comcast.
The new program coming to Longmont High will focus on business.
District leaders point to Skyline High’s graduation statistics to illustrate the program’s success. At Skyline, 100 percent of students in the first three P-TECH graduating classes earned a high school diploma in four years.
For the 2020 Skyline P-TECH graduates, 24 of the 33, or about 70 percent, also earned associate’s degrees. For the 2021 graduating class, 30 of the 47 have associate’s degrees — with one year left for those students to complete the college requirements.
For the most recent 2022 graduates, who have two years left to complete the college requirements, 19 of 59 have associate’s degrees and another six are on track to earn their degrees by the end of the summer.
Louise March, Skyline High’s P-TECH counselor, keeps in touch with the graduates, saying 27 are working part time or full time at IBM. About a third are continuing their education at a four-year college. Of the 19 who graduated in 2022 with an associate’s degree, 17 are enrolling at a four-year college, she said.
Two of those 2022 graduates are Anahi Sarmiento, who is headed to the University of Colorado Boulder’s Leeds School of Business, and Jose Ivarra, who will study computer science at Colorado State University.
“I’m the oldest out of three siblings,” Ivarra said. “When you hear that someone wants to give you free college in high school, you take it. I jumped at the opportunity.”
Sarmiento added that her parents, who are immigrants, are already working two jobs and don’t have extra money for college costs.
“P-TECH is pushing me forward,” she said. “I know my parents want me to have a better life, but I want them to have a better life, too. Going into high school, I kept that mentality that I would push myself to my full potential. It kept me motivated.”
While the program requires hard work, the two graduates said, they still enjoyed high school and had outside interests. Ivarra was a varsity football player who was named player of the year. Sarmiento took advantage of multiple opportunities, from helping elementary students learn robotics to working at the district’s Innovation Center.
Ivarra said he likes that P-TECH has the same high expectations for all students, no matter their backgrounds, and gives them support in any areas where they need help. Spanish is his first language and, while math came naturally, language arts was more challenging.
“It was tough for me to see all these classmates use all these big words, and I didn’t know them,” he said. “I just felt less. When I went into P-TECH, the teachers focus on you so much, checking on every single student.”
They said it’s OK to struggle or even fail. Ivarra said he failed a tough class during the pandemic, but was able to retake it and passed. Both credited March, their counselor, with providing unending support as they navigated high school and college classes.
“She’s always there for you,” Sarmiento said. “It’s hard to be on top of everything. You have someone to go to.”
Students also supported each other.
“You build bonds,” Ivarra said. “You’re all trying to figure out these classes. You grow together. It’s a bunch of people who want to succeed. The people that surround you in P-TECH, they push you to be better.”
P-TECH has no entrance requirements or prerequisite classes. You don’t need to be a top student, have taken advanced math or have a background in technology.
With students starting the rigorous program with a wide range of skills, teachers and counselors said, they quickly figured out the program needed stronger support systems.
March said freshmen in the first P-TECH class struggled that first semester, prompting the creation of a guided study class. The 90-minute class, held every other day, includes both study time and time to learn workplace skills, such as writing a resume and interviewing. Teachers also offer tutoring twice a week after school.
“The guided study has become crucial to the success of the program,” March said.
Another way P-TECH provides extra support is through summer orientation programs for incoming freshmen.
At Skyline, ninth graders take a three-week bridge class — worth half a credit — that includes learning good study habits. They also meet IBM mentors and take a field trip to Front Range Community College.
“They get their college ID before they get their high school ID,” March said.
During a session in June, 15 IBM mentors helped the students program a Sphero robot to travel along different track configurations. Kathleen Schuster, who has volunteered as an IBM mentor since the P-TECH program started here, said she wants to “return some of the favors I got when I was younger.”
“Even this play stuff with the Spheros, it’s teaching them teamwork and a little computing,” she said. “Hopefully, through P-TECH, they will learn what it takes to work in a tech job.”
Incoming Skyline freshman Blake Baker said he found a passion for programming at Trail Ridge Middle and saw P-TECH as a way to capitalize on that passion.
“I really love that they give you options and a path,” he said.
Trail Ridge classmate Itzel Pereyra, another programming enthusiast, heard about P-TECH from her older brother.
“It’s really good for my future,” she said. “It’s an exciting moment, starting the program. It will just help you with everything.”
While some of the incoming ninth graders shared dreams of technology careers, others see P-TECH as a good foundation to pursue other dreams.
Skyline incoming ninth grader Marisol Sanchez wants to become a traveling nurse, demonstrating technology and new skills to other nurses. She added that the summer orientation sessions are a good introduction, helping calm the nerves that accompany combining high school and college.
“There’s a lot of team building,” she said. “It’s getting us all stronger together as a group and introducing everyone.”
Silver Creek’s June camp for incoming ninth graders included field trips to visit Cisco, Seagate, PEAK Resources, Comcast and Front Range Community College.
During the Front Range Community College field trip, the students heard from Front Range staff members before going on a scavenger hunt. Groups took photos to prove they completed tasks, snapping pictures of ceramic pieces near the art rooms, the most expensive tech product for sale in the bookstore and administrative offices across the street from the main building.
Emma Horton, an incoming freshman, took a cybersecurity class as a Flagstaff Academy eighth grader that hooked her on the idea of technology as a career.
“I’m really excited about the experience I will be getting in P-TECH,” she said. “I’ve never been super motivated in school, but with something I’m really interested in, it becomes easier.”
Deb Craven, dean of instruction at Front Range’s Boulder County campus, promised the Silver Creek students that the college would support them. She also gave them some advice.
“You need to advocate and ask for help,” she said. “These two things are going to help you the most. Be present, be engaged, work together and lean on each other.”
Craven, who oversees Front Range’s P-TECH program partnership, said Front Range leaders toured the original P-TECH program in New York along with St. Vrain and IBM leaders in preparation for bringing P-TECH here.
“Having IBM as a partner as we started the program was really helpful,” she said.
When the program began, she said, freshmen took a more advanced technology class as their first college class. Now, she said, they start with a more fundamental class in the spring of their freshman year, learning how to build a computer.
“These guys have a chance to grow into the high school environment before we stick them in a college class,” she said.
Summer opportunities aren’t just for P-TECH’s freshmen. Along with summer internships, the schools and community colleges offer summer classes.
Silver Creek incoming 10th graders, for example, could take a personal financial literacy class at Silver Creek in the mornings and an introduction to cybersecurity class at the Innovation Center in the afternoons in June.
Over at Skyline, incoming 10th graders in P-TECH are getting paid to teach STEM lessons to elementary students while earning high school credit. Students in the fifth or sixth year of the program also had the option of taking computer science and algebra classes at Front Range.
And at Frederick, incoming juniors are taking an introduction to manufacturing class at the district's Career Elevation and Technology Center this month in preparation for an advanced manufacturing class they’re taking in the fall.
“This will give them a head start for the fall,” said instructor Chester Clark.
Incoming Frederick junior Destini Johnson said she’s not sure what she wants to do after high school, but believes the opportunities offered by P-TECH will prepare her for the future.
“I wanted to try something challenging, and getting a head start on college can only help,” she said. “It’s really incredible that I’m already halfway done with an associate’s degree and high school.”
IBM P-TECH program manager Tracy Knick, who has worked with the Skyline High program for three years, said it takes a strong commitment from all the partners — the school district, IBM and Front Range — to make the program work.
“It’s not an easy model,” she said. “When you say there are no entrance requirements, we all have to be OK with that and support the students to be successful.”
IBM hosted 60 St. Vrain interns this summer, while two Skyline students work as IBM “co-ops” — a national program — to assist with the P-TECH program.
The company hosts two to four formal events for the students each year to work on professional and technical skills, while IBM mentors provide tutoring in algebra. During the pandemic, IBM also paid for subscriptions to tutor.com so students could get immediate help while taking online classes.
“We want to get them truly workforce ready,” Knick said. “They’re not IBM-only skills we’re teaching. Even though they choose a pathway, they can really do anything.”
As the program continues to expand in the district, she said, her wish is for more businesses to recognize the value of P-TECH.
“These students have had intensive training on professional skills,” she said. “They have taken college classes enhanced with the same digital credentials that an IBM employee can learn. There should be a waiting list of employers for these really talented and skilled young professionals.”
©2022 the Daily Camera (Boulder, Colo.). Distributed by Tribune Content Agency, LLC.
Phishing incidents are on the rise. A report from IBM shows that phishing was the most popular attack vector in 2021, with one in five employees falling victim to phishing techniques.
Although technical solutions protect against phishing threats, no solution is 100% effective. Consequently, companies have no choice but to involve their employees in the fight against hackers. This is where security awareness training comes into play.
Security awareness training gives companies the confidence that their employees will execute the right response when they discover a phishing message in their inbox.
As the saying goes, “knowledge is power,” but the effectiveness of knowledge depends heavily on how it is delivered. When it comes to phishing attacks, simulations are among the most effective forms of training because they directly mimic how an employee would react to a real attack. Since employees do not know whether a suspicious email in their inbox is a simulation or a real threat, the training becomes even more valuable.
It is critical to plan, implement and evaluate a cyber awareness training program to ensure it truly changes employee behavior. However, for this effort to be successful, it should involve much more than just emailing employees. Key practices to consider include:
Because employees cannot tell the difference between phishing simulations and real cyberattacks, simulations evoke genuine emotions and reactions, so awareness training should be conducted thoughtfully. As organizations engage their employees to combat ever-increasing attacks and protect their assets, it is important to keep morale high and create a positive culture of cyber hygiene.
Based on years of experience, cybersecurity firm CybeReady has seen companies make the following common mistakes.
The approach of running a phishing simulation as a test to catch and punish "repeat offenders" can do more harm than good.
An educational experience that involves stress is counterproductive and even traumatic. As a result, employees will not go through the training but look for ways to circumvent the system. Overall, the fear-based "audit approach" is not beneficial to the organization in the long run because it cannot provide the necessary training over an extended period.
Solution #1: Be sensitive
Because maintaining positive employee morale is critical to the organization's well-being, provide positive just-in-time training.
Just-in-time training means that once employees have clicked on a link within the simulated attack, they are directed to a short and concise training session. The idea is to quickly educate the employee on their mistake and give them essential tips on spotting malicious emails in the future.
This is also an opportunity for positive reinforcement, so be sure to keep the training short, concise, and positive.
Solution #2: Inform relevant departments
Communicate with relevant stakeholders to ensure they are aware of ongoing phishing simulation training. Many organizations forget to inform relevant stakeholders, such as HR or other employees, that the simulations are being conducted. Learning has the best effect when participants have the opportunity to feel supported, make mistakes, and correct them.
It is important to vary the simulations. Sending the same simulation to all employees, especially at the same time, is not only not instructive but also has no valid metrics when it comes to organizational risk.
The "warning effect" - the first employee to discover or fall for the simulation warns the others. This prepares your employees to respond to the "threat" by anticipating the simulation, thus bypassing the simulation and the training opportunity.
Another negative impact is social desirability bias, which causes employees to over-report incidents to IT in order to be viewed more favorably, even when they did not actually notice anything suspicious. This overloads the system and the IT department.
This form of simulation also leads to inaccurate results, such as unrealistically low click-through rates and over-reporting rates. Thus, the metrics do not show the real risks of the company or the problems that need to be addressed.
Solution: Drip mode
Drip mode allows sending multiple simulations to different employees at different times. Certain software solutions can even do this automatically by sending a variety of simulations to different groups of employees. It's also important to implement a continuous cycle to ensure that all new employees are properly onboarded and to reinforce that security matters 24/7, not just as a box checked for minimum compliance.
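The drip-mode idea can be sketched in a few lines of code. This is purely illustrative (the template names and the `drip_schedule` helper are invented for this sketch, not any vendor's API): each employee receives a randomly chosen simulation at a randomly staggered time, so a single mass send can never trigger the "warning effect."

```python
import random
from datetime import datetime, timedelta

# Hypothetical simulation templates -- a real program would vary sender,
# pretext, language, and difficulty per template.
TEMPLATES = ["invoice-overdue", "password-reset", "shared-document", "ceo-request"]

def drip_schedule(employees, start, window_days=30, seed=42):
    """Assign each employee a randomized template and send time, spread
    across a window, so colleagues don't all get the same email at once."""
    rng = random.Random(seed)
    schedule = []
    for name in employees:
        template = rng.choice(TEMPLATES)
        offset = timedelta(days=rng.randrange(window_days),
                           hours=rng.randrange(8, 18))  # business hours only
        schedule.append((name, template, start + offset))
    return schedule

if __name__ == "__main__":
    plan = drip_schedule(["ana", "bo", "cy"], datetime(2022, 9, 1))
    for name, template, when in plan:
        print(f"{name}: '{template}' at {when:%Y-%m-%d %H:%M}")
```

A continuous cycle then simply re-runs the scheduler each month with the current employee roster, picking up new joiners automatically.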
With over 3.4 billion phishing attacks per day, it's safe to assume that at least a million of them differ in complexity, language, approach, or even tactics.
Unfortunately, no single phishing simulation can accurately reflect an organization's risk. Relying on a single phishing simulation result is unlikely to provide reliable results or comprehensive training.
Another important consideration is that different groups of employees respond differently to threats, not only because of their vigilance, training, position, tenure, or even education level but because the response to phishing attacks is also contextual.
Solution: Implement a variety of training programs
Behavior change is an evolutionary process and should therefore be measured over time. Each training session contributes to the progress of the training. Training effectiveness, or in other words, an accurate reflection of real organizational behavior change, can be determined after multiple training sessions and over time.
The most effective solution is to continuously conduct various training programs (at least once a month) with multiple simulations.
It is highly recommended to train employees according to their risk level. A diverse and comprehensive simulation program also provides reliable measurement data based on systematic behavior over time. To validate their efforts at effective training, organizations should be able to obtain a valid indication of their risk at any given point in time while monitoring progress in risk reduction.
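One way to make "train employees according to their risk level" concrete is to measure each employee's click rate across many simulation rounds and bucket the results. A minimal sketch, with made-up data and made-up thresholds:

```python
# Hypothetical per-employee results over four simulation rounds:
# True = clicked the simulated phishing link in that round.
results = {
    "ana": [True, True, False, True],
    "bo":  [False, False, False, True],
    "cy":  [False, False, False, False],
}

def risk_level(clicks, high=0.5, medium=0.2):
    """Bucket an employee by click rate measured over repeated simulations,
    so training cadence and difficulty can match measured risk."""
    rate = sum(clicks) / len(clicks)
    if rate >= high:
        return "high"
    if rate >= medium:
        return "medium"
    return "low"

levels = {name: risk_level(clicks) for name, clicks in results.items()}
print(levels)  # ana clicked 3 of 4 rounds -> "high"; cy never clicked -> "low"
```

Because the score aggregates systematic behavior over time rather than one send, it reflects the trend the article describes: a single simulation result says little, but repeated measurements show whether risk is actually falling.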
Creating such a program may seem overwhelming and time-consuming. That's why we have created a playbook of the 10 key practices you can use to create a simple and effective phishing simulation. Simply download the CybeReady Playbook or meet with one of our experts for a product demo and learn how CybeReady's fully automated security awareness training platform can help your organization achieve the fastest results with virtually zero IT effort.
I believe that the last two decades in enterprise computing have been the prequel to the main act to follow. In this main act, the winners will be enterprises willing to change, to question everything, and to leverage the latest in digital innovation to scale the impact of AI, hybrid cloud and automation on every aspect of their business.
The Covid pandemic disrupted business as usual for most companies, and many pivoted to digital technology, including AI, to sustain operations. Earlier this year, IBM published a study revealing the size of the AI skills gap across Europe, which found that the tech sector is struggling to find employees with adequate AI knowledge or experience. Nearly seven in ten tech job seekers and tech employees believe that potential recruits lack the skills necessary for a career in AI. This deficit has the potential to stifle digital innovation and hold back economic growth.
Mind the gap
The IBM report, ‘Addressing the AI Skills Gap in Europe’, exposed a worrying shortfall in the skills required for a career in AI. Although technical capabilities are vital, problem solving was rated the most critical soft skill for tech roles (cited by up to 37% of survey participants). However, around a quarter of tech recruiters (23%) have difficulty finding applicants with this aptitude, along with shortfalls in critical and strategic thinking. On the technical side, 40% of tech job seekers and employees said software engineering and knowledge of programming languages are the most important capabilities for the AI/tech workforce to have.
How to address the issue
As AI moves into the mainstream, specialist tech staff are working more closely than ever with business managers. To secure the best possible outcomes, the soft skills of interpersonal communication, strategic problem solving and critical thinking are required across all disciplines to help ensure the most beneficial personal interactions. Demonstrating these skills can greatly improve employability and career development in AI.
The report showed that offering education and skills training is seen as a top priority for many companies looking to improve AI recruitment in the future. As a result, IBM has already taken proactive steps to help applicants and employees enhance their AI skills.
IBM launched IBM SkillsBuild, which brings together two world-class, skills-based learning programs—"Open P-TECH" and "SkillsBuild"—under one umbrella. Through the program, students, educators, job seekers, and the organisations that support them have access to free digital learning, resources, and support focused on the core technology and workplace skills needed to succeed in jobs. SkillsBuild is a free programme which contains an AI skills module for secondary education students and adults seeking entry-level employment.
Further concerted effort
A great deal remains to be done to solve this skills gap. However, I believe we can agree that a solution is achievable. What’s required now is for industry, government and academia to work together to put existing ideas into practice and to think of new ways to solve the challenge. At the start of the year, the DCMS announced £23 million of government funding to create 2,000 scholarships in AI and data science in England. The new scholarships will ensure more people can build successful careers in AI, create and develop new and bigger businesses, and will improve the diversity of this growing and innovative sector. I hope to see further investment, and programmes such as our SkillsBuild, as key drivers of change. Solutions and initiatives such as these will provide a significant boost for the UK while providing a rewarding career for many.
This article was authored by Sreeram Visvanathan, Chief Executive of IBM UK and Ireland
IBM is looking to grow its enterprise server business with the expansion of its Power10 portfolio announced today.
IBM Power is a RISC (reduced instruction set computer) based chip architecture that is competitive with other chip architectures including x86 from Intel and AMD. IBM’s Power hardware has been used for decades for running IBM’s AIX Unix operating system, as well as the IBM i operating system that was once known as the AS/400. In more recent years, Power has increasingly been used for Linux, and specifically in support of Red Hat and its OpenShift Kubernetes platform that enables organizations to run containers and microservices.
The IBM Power10 processor was announced in August 2020, with the first server platform, the E1080 server, coming a year later in September 2021. Now IBM is expanding its Power10 lineup with four new systems, including the Power S1014, S1024, S1022 and E1050, which are being positioned by IBM to help solve enterprise use cases, including the growing need for machine learning (ML) and artificial intelligence (AI).
Usage of IBM’s Power servers could well be shifting into territory that Intel today still dominates.
Steve Sibley, VP of IBM Power product management, told VentureBeat that approximately 60% of Power workloads currently run AIX Unix. The IBM i operating system accounts for approximately 20% of workloads. Linux makes up the remaining 20% and is on a growth trajectory.
IBM owns Red Hat, which has its namesake Linux operating system supported on Power, alongside the OpenShift platform. Sibley noted that IBM has optimized its new Power10 system for Red Hat OpenShift.
“We’ve been able to demonstrate that you can deploy OpenShift on Power at less than half the cost of an Intel stack with OpenShift because of IBM’s container density and throughput that we have within the system,” Sibley said.
Across the new servers, the ability to access more memory at greater speed than previous generations of Power servers is a key feature. The improved memory is enabled by support of the Open Memory Interface (OMI) specification that IBM helped to develop, and is part of the OpenCAPI Consortium.
“We have Open Memory Interface technology that provides increased bandwidth but also reliability for memory,” Sibley said. “Memory is one of the common areas of failure in a system, particularly when you have lots of it.”
The new servers announced by IBM all use technology from the open-source OpenBMC project that IBM helps to lead. OpenBMC provides secure code for managing the baseboard of the server in an optimized approach for scalability and performance.
Among the new servers announced today by IBM is the E1050, which is a 4RU (4 rack unit) sized server, with 4 CPU sockets, that can scale up to 16TB of memory, helping to serve large data- and memory-intensive workloads.
The S1014 and the S1024 are also both 4RU systems, with the S1014 providing a single CPU socket and the S1024 integrating a dual-socket design. The S1014 can scale up to 2TB of memory, while the S1024 supports up to 8TB.
Rounding out the new servers is the S1022, which is a 1RU server that IBM is positioning as an ideal platform for OpenShift container-based workloads.
AI and ML workloads are a particularly good use case for all the Power10 systems, thanks to optimizations that IBM has built into the chip architecture.
Sibley explained that all Power10 chips benefit from IBM’s Matrix Math Acceleration (MMA) capability. The enterprise use cases that Power10-based servers can help to support include organizations that are looking to build out risk analytics, fraud detection and supply chain forecasting AI models, among others.
IBM’s Power10 systems support and have been optimized for multiple popular open-source machine learning frameworks including PyTorch and TensorFlow.
“The way we see AI emerging is that a vast majority of AI in the future will be done on the CPU from an inference standpoint,” Sibley said.
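To see why on-chip matrix math matters for CPU inference: a single dense neural-network layer is, at its core, a matrix-vector multiply plus a bias and an activation, and that multiply is the operation MMA-class units accelerate. A toy sketch in plain Python (the weights and inputs are invented for illustration; a real workload would use an optimized framework such as PyTorch or TensorFlow):

```python
def dense_layer(weights, bias, x):
    """One fully connected layer: y = relu(W @ x + b).
    Each output element is a dot product -- the matrix math that
    accelerator units speed up during CPU-based inference."""
    out = []
    for row, b in zip(weights, bias):
        acc = sum(w * xi for w, xi in zip(row, x)) + b
        out.append(max(0.0, acc))  # ReLU activation
    return out

# Made-up 2x2 weights and 2-element input.
W = [[0.5, -0.2], [0.3, 0.8]]
b = [0.1, -0.4]
print(dense_layer(W, b, [1.0, 2.0]))  # approximately [0.2, 1.5]
```

Stacking many such layers (with far larger matrices) is what makes inference throughput bound by how fast the chip can do multiply-accumulate operations.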
“The first code for Neo4j and the property graph database was written at IIT Bombay”, said Neo4j’s chief marketing officer, Chandra Rangan.
In an exclusive interview with Analytics India Magazine, Rangan said that the database was sketched on a napkin on a flight to Bombay, where an intern and Emil Eifrem, Neo4j’s founder and CEO, worked together to create the first code for its graph database platform.
Rangan joined Neo4j as the chief marketing officer (CMO) on May 10, 2022. Prior to this, he worked at Google, running Google Cloud Platform product marketing and, more recently, product-led growth, strategy, and operations for Google Maps Platform. Rangan has over two decades of technology infrastructure experience across marketing leadership, strategy, and operations at Hewlett Packard Enterprise, Gartner, Symantec, McKinsey, and IBM.
Founded in 2007, Neo4j has more than 700 employees globally. In June 2022, the company raised about $325 million in a Series F funding round led by Eurazeo, alongside participation from GV (formerly Google Ventures) and other existing investors like One Peak, Creandum, Greenbridge Partners, DTCP, and Lightrock.
This is one of the largest investments in a private database company, raising Neo4j’s valuation to over $2 billion. By comparison, MongoDB raised a total of $311 million as a private company and about $192 million in its IPO, which valued it at $1.2 billion.
With its latest funding round, Neo4j is looking to invest in expanding its footprint globally, and India is one of its top choices, thanks to a larger developer ecosystem, alongside a burgeoning startup ecosystem and IT service providers using its platform to offer solutions to global customers.
Neo4j’s community edition, which is open source, is widely adopted by developers in the country. “We have an overall community of almost a quarter million users who are familiar with our platform”, said Rangan, explaining that it is one of the largest developer communities in the country. With the fresh infusion of funds, the company looks to tap into the market, expand its services, sales and support, and invest in the right strategies going forward.
As part of its expansion plans, Neo4j began hiring for sales leadership and country manager roles last year and will continue that momentum this year. “This is a big bet for us in multiple ways”, added Rangan, pointing to the company’s Indian roots and the innovation happening in the country.
Besides India, Neo4j has a strong presence in Silicon Valley and Sweden and has a huge developer ecosystem in the US, China, Europe, South East Asia and others.
Over the years, Neo4j has grown through developers and some of the early adopters of its platform. “Fortunately, developers interested in graph databases will typically start with us”, said Rangan affirmatively.
Further, explaining the conversion cycle, he said that once they know about graph databases, they later join the community edition. Then, once they get comfortable with the use cases and start putting this into production, they eventually get into a paid version for the advanced security, support, scalability, and commercial constructs.
“In India, that’s the similar motion we are seeing”, said Rangan. He revealed that they already have a huge developer community. Banking on this community, they plan to invest in continuing the engagement with the community in a meaningful way.
Of late, the company has also started hiring several community leaders to encourage proactive engagement within the community. In addition, it is also investing heavily in sales and marketing engines, including technical sales, which work closely with organisations in building the use cases, alongside the implementation of services and support.
One thing that makes Neo4j stand apart from other players is its intuitiveness in helping deploy applications faster because of its flexible schema. This helps developers to add properties, nodes, and more. “It gives tremendous flexibility for developers so they can get to the outcome much more quickly”, said Rangan.
But what about the learning curve? Rangan said, “Literally, for a new developer, if they start learning graphs for the first time, it is very intuitive.” He explained that the learning curve is not steep and doesn’t take long. “But, for folks who have been working in the development space and building applications and are very familiar and comfortable with RDBMS, i.e., rows and tables, strangely enough, the learning curve is a little higher and steeper”, added Rangan, noting that such developers have to unlearn table-based modelling in order to model intuitively. He said the best way to overcome that learning curve is to try it out.
“So, when you think about the learning curve, it is a very easy learning curve, especially if you can put aside the former way of thinking about things like rows and tables and go back to first principles.”—Chandra Rangan.
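The "flexible schema" point can be illustrated with a toy in-memory property graph. This sketch is purely illustrative and is not Neo4j's actual API: nodes carry arbitrary key-value properties and relationships are typed edges, so adding a new attribute never requires altering a table schema.

```python
class Graph:
    """A minimal property graph: nodes with free-form properties,
    plus typed relationships between them."""

    def __init__(self):
        self.nodes = {}   # node id -> dict of properties
        self.rels = []    # (source id, relationship type, target id)

    def add_node(self, node_id, **props):
        # No schema: any node may carry any properties, or none at all.
        self.nodes[node_id] = props

    def relate(self, src, rel_type, dst):
        self.rels.append((src, rel_type, dst))

    def neighbours(self, node_id, rel_type):
        return [dst for src, t, dst in self.rels
                if src == node_id and t == rel_type]

g = Graph()
g.add_node("alice", role="developer")
g.add_node("neo4j", category="graph database")
g.add_node("bob")                      # properties are optional
g.relate("alice", "USES", "neo4j")
g.relate("bob", "USES", "neo4j")
print(g.neighbours("alice", "USES"))   # -> ['neo4j']
```

Traversing relationships here is a list scan; a graph database indexes adjacency so the same traversal stays fast at scale, which is what makes the model practical beyond toy sizes.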
The International Consortium of Investigative Journalists (ICIJ) released the full list of companies and individuals in the Panama Papers, implicating at least 140 politicians from more than 50 countries in tax evasion schemes. The journalists used Neo4j to map the relationships in their data, finding common touchpoints and the names of people who held multiple offshore accounts and evaded tax.
“We believe a whole bunch of sectors can actually get value. We have seen new sectors kind of pop up on a pretty regular basis”, said Rangan while citing various use cases in financial service sectors (fraud detection), healthcare (vaccine distribution), pharmaceuticals (drug discovery), supply chain and logistics (mapping automation), tech companies (managing IT networks), retail (recommendation systems), and more.
Chandra Rangan further explained that people are still discovering what they can use graph databases for and how useful it is in some sense. He said that it is unleashing a whole bunch of innovations. “So, we are hoping for a lot of that to happen here in India because of the developer community”, he added.
Rangan said Neo4j would be aggressively investing in the community and ecosystem here in India. Besides this, he said they are investing in building a marketing and sales team, which has grown significantly in the last year. In addition, Neo4j is also investing in building a partner ecosystem to support a wider range of customers.
“Depending on how quickly we can grow or cannot grow—again, responsible growth—we want to grow as fast as possible. But, we also want to make sure as we hire people as we establish the relationship, we are investing enough time, effort, and money to make sure that these relationships are successful”, concluded Rangan.
Autism is known as a spectrum disorder because every autistic person is different, with unique strengths and challenges.
Varney says many autistic people experienced education as a system that focused on these challenges, which can include social difficulties and anxiety.
He is pleased this is changing, with the latest reforms embracing autistic students’ strengths.
But the unemployment rate of autistic people remains disturbingly high. ABS data from 2018 shows 34.1 per cent of autistic people are unemployed – three times the rate for people with any type of disability and almost eight times that of people without a disability.
“A lot of the time people hear that someone’s autistic and they assume incompetence,” says Varney, who was this week appointed the chair of the Victorian Disability Advisory Council.
“But we have unique strengths, specifically hyper focus, great creativity, and we can think outside the box, which is a great asset in workplaces.”
In Israel, the defence force has a specialist intelligence unit made up exclusively of autistic soldiers, whose skills are deployed in analysing, interpreting and understanding satellite images and maps.
Locally, organisations that actively recruit autistic talent include software giant SAP, Westpac, IBM, ANZ, the Australian Tax Office, Telstra, NAB and PricewaterhouseCoopers.
Chris Pedron is a junior data analyst at Australian Spatial Analytics, a social enterprise that says on its website “neurodiversity is our advantage – our team is simply faster and more precise at data processing”.
He was hired after an informal chat. (Australian Spatial Analytics also often provides interview questions 48 hours in advance.)
Pedron says the traditional recruitment process can work against autistic people because there are a lot of unwritten social cues, such as body language, which he doesn’t always pick up on.
“If I’m going in and I’m acting a bit physically standoffish, I’ve got my arms crossed or something, it’s not that I’m not wanting to be there, it’s just that new social interaction is something that causes anxiety.”
Pedron also finds eye contact uncomfortable and has had to train himself over the years to concentrate on a point on someone’s face.
Australian Spatial Analytics addresses a skills shortage by delivering a range of data services that were traditionally outsourced offshore.
Projects include digital farm maps for the grazing industry, technical documentation for large infrastructure and map creation for land administration.
Pedron has always found it easy to map things out in his head. “A lot of the work done here at ASA is geospatial so having autistic people with a very visual mindset is very much an advantage for this particular job.”
Pedron listens to music on headphones in the office, which helps him concentrate, and stops him from being distracted. He says the simpler and clearer the instructions, the easier it is for him to understand. “The less I have to read between the lines to understand what is required of me the better.”
Australian Spatial Analytics is one of three jobs-focused social enterprises launched by Queensland charity White Box Enterprises.
It has grown from three to 80 employees in 18 months and – thanks to philanthropist Naomi Milgrom, who has provided office space in Cremorne – has this year expanded to Melbourne, enabling Australian Spatial Analytics to create 50 roles for Victorians by the end of the year.
Chief executive Geoff Smith hopes they are at the front of a wave of employers recognising that hiring autistic people can make good business sense.
“Rather than focus on the deficits of the person, focus on the strengths. A quarter of National Disability Insurance Scheme plans name autism as the primary disability, so society has no choice – there’s going to be such a huge number of people who are young and looking for jobs who are autistic. There is a skills shortage as it is, so you need to look at neurodiverse talent.”
In 2017, IBM launched a campaign to hire more neurodiverse candidates (a term that covers a range of conditions including autism, attention deficit hyperactivity disorder, or ADHD, and dyslexia).
The initiative was in part inspired by software and data quality engineering services firm Ultranauts, which boasted at an event that “they ate IBM’s lunch at testing by using an all-autistic staff”.
The following year Belinda Sheehan, a senior managing consultant at IBM, was tasked with rolling out a pilot at its client innovation centre in Ballarat.
“IBM is very big on inclusivity,” says Sheehan. “And if we don’t have diversity of thought, we won’t have innovation. So those two things go hand in hand.”
Sheehan worked with Specialisterne Australia, a social enterprise that assists businesses in recruiting and supporting autistic people, to find talent using a non-traditional recruitment process that included a week-long task.
Candidates were asked to work together to find a way for a record shop to connect with customers when the bricks and mortar store was closed due to COVID.
Ten employees were eventually selected. They started in July 2019 and work in roles across IBM, including data analysis, testing, user experience design, data engineering, automation, blockchain and software development. Another eight employees were hired in July 2021.
Sheehan says clients have been delighted with their ideas. “The UX [user experience] designer, for example, comes in with such a different lens. Particularly as we go to artificial intelligence, you need those different thinkers.”
One client said if they had to describe the most valuable contribution to the project in two words it would be “ludicrous speed”. Another said: “automation genius.”
IBM has sought to make the office more inclusive by creating calming, low sensory spaces.
It has formed a business resource group for neurodiverse employees and their allies, with four squads focusing on recruitment, awareness, career advancement and policies and procedures.
And it has hired a neurodiversity coach to work with individuals and managers.
Sheehan says that challenges have included some employees getting frustrated because they did not have enough work.
“These individuals want to come to work and get the work done – they are not going off for a coffee and chatting.”
Increased productivity is a good problem to have, Sheehan says, but as a manager, she needs to come up with ways they can enhance their skills in their downtime.
There have also been difficulties around different communication styles, with staff finding some autistic employees a bit blunt.
Sheehan encourages all staff to do a neurodiversity 101 training course run by IBM.
“Something may be perceived as rude, but we have to turn that into a positive. It’s good to have someone who is direct, at least we all know what that person is thinking.”
Chris Varney is delighted to see neurodiversity programs in some industries but points out that every autistic person has different interests and abilities.
Some are non-verbal, for example, and not all have the stereotypical autism skills that make them excel at data analysis.
“We’ve seen a big recognition that autistic people are an asset to banks and IT firms, but there’s a lot more work to be done,” Varney says.
“We need to see jobs for a diverse range of autistic people.”
As we move deeper into 2022, almost every company is feeling the cyberskills gap to some degree. Now with the cyber workforce gap hitting 2.72 million, it’s unsurprising that IBM research recently found that 83% of organizations have had more than one data breach.
With the workforce gap showing no sign of closing, training is becoming critical to equip cybersecurity professionals with the skills they need to thrive amid today’s complex threat landscape.
As the cyberskills gap continues to grow, more and more organizations are recognizing the need to use training — rather than hiring — to fix the shortage.
“Studies continue to show that a cybersecurity staffing shortage is placing organizations at risk, and the skills shortage and its associated impacts have not improved over the past few years,” said Kevin Hanes, CEO of Cybrary, a cybersecurity skills training platform.
“Products and technology will not help solve this fundamental issue; rather, investing in people is key to narrowing the cybersecurity skills gap and helping to combat increasing burnout and human error,” Hanes said.
Hanes says that Cybrary is aiming to address these challenges by providing cybersecurity practitioners with the “right training at the right time” to equip them to respond to modern threats.
It does this by providing them with a platform they can use to access learning materials and prepare for professional certifications with scenario-based training and over 1,900 learning activities.
Cybrary is competing against a range of cybersecurity training providers that offer online, in-person training and boot camps. The provider sits loosely within the global IT training market, which researchers valued at $68 billion in 2020, and estimate will reach a value of $97.6 billion by 2026.
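For a sense of scale, the growth figures cited above imply a compound annual growth rate of roughly 6%. A quick back-of-envelope check (the 2020 and 2026 values come from the researchers quoted; the calculation is ours):

```python
# Implied CAGR of the IT training market: $68B (2020) -> $97.6B (2026).
start, end, years = 68.0, 97.6, 6
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # 6.2%
```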
One of Cybrary’s competitors is Pluralsight, which offers a mixture of courses, skill-assessment labs, and hands-on learning developed by industry experts on subjects such as Microsoft Azure deployment, AWS operations and Ruby language fundamentals.
Pluralsight most recently reported revenue of $430.4 million for 2020.
Another competitor is Infosec, a cybersecurity training and security awareness training provider with over 2,000 resources, including over 1,400 cybersecurity courses and cyber ranges, and live boot camps with instructor-led training. According to ZoomInfo, Infosec generates $31 million in revenue.
However, Hanes argues that Cybrary differentiates itself from other solutions on the market by offering up-to-date learning material at a lower price point.
“Cybrary’s platform allows individuals and teams to skill up on their own time from anywhere in the world. And with the Cybrary Threat Intelligence Group (CTIG) and SMEs developing new content in real time, Cybrary users can be confident that we are providing them with high-quality training that covers the latest threats and vulnerabilities impacting the industry.”
Today, Cybrary announced it has raised $25 million as part of a series C funding round, bringing its total funding to $48 million following a $19 million series B funding round in 2019.
The organization intends to use the funding to enhance its R&D across engineering, product and marketing teams, while growing the capabilities of the Cybrary Threat Intelligence Group.
More broadly, the funding highlights that investors are looking to security training as a potential solution to bridge the cyberskills gap.