Ensure your success with HPE6-A42 boot camp

Passing the HPE6-A42 exam is not enough if you want to perform well in the field. You also need solid HPE6-A42 knowledge that will strengthen your position in practical work. We focus on deepening your understanding of the HPE6-A42 objectives with our actual test questions and answers, supported by VCE practice tests.

Exam Code: HPE6-A42 Practice test 2022 by Killexams.com team
HPE6-A42 Implementing Aruba WLAN (IAW) 8

This course teaches the knowledge, skills, and practical experience required to set up and configure a basic Aruba WLAN utilizing the OS 8.X architecture and features. Using lecture and labs, this course provides the technical understanding and hands-on experience of configuring an Aruba WLAN with a single Mobility Master, one controller, and one AP. Participants will learn how to use Aruba hardware and ArubaOS to install and build a complete, secure controller network with multiple SSIDs. This course provides the underlying material required to prepare candidates for the Aruba Certified Mobility Associate (ACMA) V8 certification exam.

Topics
WLAN Fundamentals
Describes the fundamentals of 802.11, RF frequencies and channels
Explain RF Patterns and coverage including SNR
Roaming Standards and QOS requirements
Mobile First Architecture
An introduction to Aruba Products including controller types and modes
OS 8.X Architecture and features
License types and distribution
Mobility Master and Mobility Controller Configuration
Secure WLAN configuration
Identifying WLAN requirements such as SSID name, encryption, authentication
Explain AP groups structure and profiles
Configuration of WLAN using the Mobility Master GUI
AP Provisioning
Describes the communication between AP and Mobility controller
Explain the AP booting sequence and requirements
Explores the APs controller discovery mechanisms
Explains how to secure AP to controller communication using CPSec
Describes AP provisioning and operations
WLAN Security
Describes the 802.11 discovery, authentication and association
Explores the various authentication methods, 802.1x with WPA/WPA2, Mac auth
Describes the authentication server communication
Explains symmetric vs asymmetric Keys, encryption methods
WIPS is described along with rogue discovery and protection
Firewall Roles and Policies
An introduction into Firewall Roles and policies
Explains Aruba's Identity-based Firewall
Configuration of Policies and Rules including aliases
Explains how to assign Roles to users
Dynamic RF Management
Explain how ARM calibrates the network selecting channels and power settings
Explores the new OS 8.X Airmatch to calibrate the network
How Client match steers clients to better APs
Guest Access
Introduces Aruba's solutions for Guest Access and the Captive portal process
Configuration of secure guest access using the internal Captive portal
The configuration of Captive portal using Clearpass and its benefits
Creating a guest provisioning account
Troubleshooting guest access
Network Monitoring and Troubleshooting
Using the MM dashboard to monitor and diagnose client, WLAN and AP issues
Traffic analysis using APPrf with filtering capabilities
A view of AirWave's capabilities for monitoring and diagnosing client, WLAN and AP issues

Objectives
After you successfully complete this course, expect to be able to:
Explain how Aruba's wireless networking solutions meet customers' requirements
Explain fundamental WLAN technologies, RF concepts, and 802.11 standards
Learn to configure the Mobility Master and Mobility Controller to control access to the Employee and Guest WLAN
Control secure access to the WLAN using Aruba Firewall Policies and Roles
Recognize and explain Radio Frequency bands and channels, and the standards used to regulate them
Describe the concept of radio frequency coverage and interference and successful implementation and diagnosis of WLAN systems
Identify and differentiate antenna technology options to ensure optimal coverage in various deployment scenarios
Describe RF power technology, including signal strength, how it is measured, and why it is critical in designing wireless networks
Learn to configure and optimize Aruba ARM and Client Match features
Learn how to perform network monitoring functions and troubleshooting

Implementing Aruba WLAN (IAW) 8

Human Resources: How to Develop a Training Intervention Program

Human resources departments typically have responsibility for handling recruiting, hiring, performance, compensation, benefits and career development. Situations requiring a training intervention usually have to do with a performance, conduct or behavior issue. Developing a training intervention program involves assessing the need, designing materials, developing training presentations and exercises, implementing the program and evaluating the success of the program.

  1. Analyze the problem you need to solve. Determine if the problem can be mitigated or eliminated by providing instruction that enables students to gain new knowledge, acquire new skills and learn how to use creative methods of problem-solving. Conduct a needs assessment to find out what the managers and participants want or expect from the training intervention. Identify the target audience and list any distinctive characteristics about them. For example, interview employees or run focus groups to determine what steps need to be taken to transition from the current state to the desired state.

  2. Design your training program by identifying the learning objectives. List what the participants should comprehend or remember after the session is over, act upon once training is complete or skills participants should be able to perform or explain. Ideally, all your learning objectives align with your company’s strategic goals.

  3. Develop the training materials. For example, to support instructors in classroom or distance learning lectures, create presentations for each lesson. Each lesson should consist of an introduction, a statement about the learning objectives, definitions, examples and a summary. Exercises should be associated with a learning objective and be strategically placed throughout the learning experience. Specify how long each exercise should take.

  4. Implement the training intervention. Make sure you have leadership sponsorship of your initiative. Upon completion, participants should be able to take action and apply what they learned in your training intervention program to get results. Monitor the outcomes. If the intended outcome is a transactional change, people typically continue their current way of working but apply tips and techniques learned to do it faster or cheaper. A transitional change occurs when new methods tried successfully by other people are attempted by your organization. A transformational change, when new untried methods are undertaken, usually occurs over a longer time.

  5. Evaluate the training program. Interview participants three to six months after the training ends to determine the impact. Adjust your methods and techniques to capitalize on the most successful outcomes. If you find resistance to changes at the beginning, allow people time to adjust to the ways of working. Identify shortcomings in the training presentations and suggest new exercises or case studies to facilitate learning. Offer incentives and rewards to encourage the desired behaviors.

Modular Design and Construction Transform Building Processes
3 Aug, 2022 By: Andrew G. Roe

Civil Engineering: AEC industry applies manufacturing-borne techniques to accelerate construction.

Cadalyst Civil Engineering Column

With construction project teams continually seeking to gain efficiency, some AEC firms are implementing techniques more commonly found in manufacturing. Leveraging digital technology, AEC professionals are modularizing design and construction to break projects into smaller pieces that can be reused across projects and programs.

The modular approach has been used for decades in automotive, aerospace, and other industries, but has historically found limited use in construction. Unlike cars and airplanes, most construction projects are unique, distinguished by site conditions, owner preferences, and local permitting requirements. But creative AEC professionals are finding new ways to envision and execute projects, looking for repeatable elements in large, complex projects such as data centers, multi-unit residential complexes, and nursing care facilities.  

“It’s taking manufacturing ideas and applying them into construction to solve logistic and supply chain issues,” said Marty Rozmanith, AEC sales strategy director at Dassault Systèmes, the technology company that developed CATIA for the aerospace industry and now provides products and solutions across multiple industries. The modular approach can address construction industry challenges such as labor shortages and increasingly tight schedules required by owners, added Rozmanith.

Data from Multiple Sources

Dassault Systèmes’ 3D EXPERIENCE platform, a collaborative, cloud-based environment that enables data sharing, is aiding the modularization of numerous AEC projects. While often used in conjunction with CATIA, SOLIDWORKS, and other Dassault Systèmes products, the platform can also incorporate data from multiple sources and software providers. “The platform is agnostic about where the data comes from,” said Rozmanith.

Modularized design and construction often features data from multiple sources. Image source: Dassault Systèmes.

As an example of modular design, Rozmanith cited a bathroom module incorporated into student housing complexes with multiple bathrooms. Dassault Systèmes customer Bouygues Construction imported prefabricated bathroom pods designed in SOLIDWORKS into a dormitory layout developed in Autodesk’s Revit. The bathroom components were then connected to a separate piping module developed in CATIA.

Modules such as the bathroom pod on the right can be incorporated into design models such as the dormitory layout on the left. Image source: Dassault Systèmes.

Taking the modular concept further, AEC teams can also incorporate behavioral data into digital models to run simulations on building equipment such as plumbing and heating, ventilation, and air-conditioning (HVAC) systems. Such simulations can help determine how equipment performs under various operating conditions and help owners make operation and maintenance (O&M) decisions.

The significance of real-world simulations has led Dassault Systèmes to use the term “virtual twin” instead of “digital twin” when referring to buildings or systems that incorporate behavioral data. Whereas a digital twin is often understood to be a digital representation of a real-world entity, a virtual twin combines visualization, modeling, and simulation in a more comprehensive manner, according to Rozmanith. “A virtual twin is a virtual copy of the physical world and is predictive,” he said. “It has knowledge and context and behaves like the real system behaves.”

AEC Professionals on Board

Industry practitioners see the modular approach playing a key role in completing projects faster and more efficiently. “[AEC] is inevitably becoming more industrialized,” said John Cerone, principal at New York City-based SHoP Architects and lead technological advisor at Assembly OSM, a modular construction startup founded in 2019 by the founders of SHoP Architects. “There’s a huge amount of efficiency to be had in designing and manufacturing — to operate more like industrialized processes,” Cerone noted.

Cerone has seen the modular concept used successfully on a growing number of projects during the last decade. His first significant experience with modular design was on the Barclays Center Arena in Brooklyn, a project initiated in 2009 and completed just three years later. Without the modular approach, Cerone estimates the project would have taken five years. The project team maintained a digital approach throughout the project, from concept design to final plans, and simultaneously developed designs and exchanged information with suppliers and contractors. The team used Dassault Systèmes’ Enterprise Knowledge Language (EKL) to automate processes and “create a lot of work quickly with a small group,” said Cerone.

Since completion of the Barclays Center project, SHoP Architects has applied the modular approach on numerous other projects, such as the corporate headquarters for Atlassian, a Sydney, Australia-based software company. When completed in 2025, the 590-foot-tall Atlassian tower will be the tallest commercial hybrid timber tower in the world, according to SHoP. The design, developed by SHoP and Australian partner BVN, features six discrete but interconnected "habitats," with each four-level habitat a freestanding mass-timber construction supported within a steel exoskeleton. A naturally ventilated zone, akin to an outdoor garden, is located at each level.

The Atlassian tower is slated to be the tallest commercial hybrid timber tower in the world. Image source: SHoP Architects.

The modular design of the Atlassian tower features interconnected habitats and outdoor gardens located at each level of the building. Image source: SHoP Architects.

Other AEC professionals have seen the modular approach particularly effective on data centers and other complex projects. Alex Kunz, principal at A.G. KUNZ, a U.S.-based company that specializes in integrated design and construction, has helped clients accelerate project delivery through model-based design, analytics, and project production management.

On a recent data center project for a large technology company, Kunz's firm was asked to help modularize designs to increase throughput and accelerate construction. "It was a very tightly integrated single facility that housed all of its functions within one facility," said Kunz. "The scale of the cloud industry was growing so fast that they needed to build data centers faster than their existing supply chain could."

Kunz and other project partners developed a modular system architecture, essentially disintegrating a tightly integrated system into a set of modules that could be managed independently and built more systematically. To do so, the team subdivided the project into separate systems, or sub-facilities, that could be reassembled into the overall facility. “[With data center design largely driven by power distribution], the major modular effort was in aggregating the power distribution and cooling systems to support configurability and scaling of the systems,” he said.

By grouping subsystems, the project team simplified designs and enabled components to be reused, not only on the initial facility, but across multiple projects, according to Kunz. "The key benefit in the case of the data centers comes at the program level," he said.

While working at multiple levels for clients, Kunz sees different types of digital models. At the project level, building information modeling (BIM) is commonly used to develop what’s sometimes called a design intent model. A separate production model tells the team how and when to build the product or facility of interest. And throughout the course of a project or program, integration of data from multiple sources is key. “We leverage interoperability between the design intent and the production model,” said Kunz.

Modular design typically requires multiple models at different stages of project and program development. Image source: AG KUNZ.

Kunz’s firm, which includes mechanical engineers, architects, and software developers, has used Dassault Systèmes’ tools to drive what they call computer-aided production engineering (CAPE). The team works closely with manufacturing integrators to develop processes upstream of individual product designs. “The approach allows us to concurrently design products and their associated manufacturing and installation processes.”

In addition to using CATIA and SOLIDWORKS, the team uses Dassault Systèmes’ DELMIA to model and simulate system operations. In-house software developers build custom tools to support production management. Whether using commercial or in-house tools, the firm focuses on five levers of production management: product design, process design, capacity, inventory, and variability. “Conventional construction management is focused on schedule and cost as the weapons to control quality,” said Kunz. “We find those inevitably don't work very well, so we're focused on these five levers that are more commonly applied in aerospace and automotive industries.”

Kunz's firm also works closely with the Project Production Institute (PPI) as part of its industry council and adopts methods from the MIT System Architecture Group to further the cause of modular design and construction. The PPI is an independent organization that seeks to improve the value that engineering and construction provide to the economy and to society. The MIT group is a member of MIT's Engineering Systems Laboratory, focused on studying the early-stage technical decisions that affect system performance.

Time will tell how modular design and construction catch on in the AEC industry. The concept is not new, but the increasing role of digital tools, such as virtual twins, interactive design review, real-time ray tracing, and other visualization improvements, may enable the approach to be used on a wider variety of projects.

How 3D printing will transform manufacturing in 2020 and beyond

Design News caught up with Paul Benning, chief technologist for HP 3D Printing & Digital Manufacturing to get an idea of where additive manufacturing is headed in the future. Benning explained that we’re headed for mixed-materials printing, surfaces innovation, more involvement from academic community, and greater use of software and data management.

Automated assembly with mixed materials

Benning believes we will begin to see automated assembly with industries seamlessly integrating multi-part assemblies including combinations of 3D printed metal and plastic parts.  “There’s not currently a super printer that can do all things intrinsically, like printing metal and plastic parts, due to factors such as processing temperatures,” Benning told Design News. “However, as automation increases, there’s a vision from the industry for a more automated assembly setup where there is access to part production from both flavors of HP technology: Multi Jet Fusion and Metal Jet.”

While the medical industry and, more recently, aerospace have incorporated 3D printing into production, Benning also sees car makers as a future customer for additive. "The auto sector is a great example of where automated assembly could thrive on the factory floor."

Benning sees a wide range of applications that might combine metal and plastics. “Benefits of an automated assembly for industrial applications include printing metals into plastic parts, building parts that are wear-resistant and collect electricity, adding surface treatments, and even building conductors or motors into plastic parts,” said Benning. “The industry isn’t ready to bring this technology to market just yet, but it’s an example of where 3D printing is headed beyond 2020.”

Surfaces will become an area of innovation

Benning sees a future where data payloads for 3D printed parts will be coded into the surface texture.  “It’s a competitive advantage to be able to build interesting things onto surfaces. HP has experimented with coding digital information into a surface texture. By encoding information into the texture itself, manufacturers can have a bigger data payload than just the serial number.”

He notes that the surface coding could be read by humans or machines. "One way to tag a part either overtly or covertly is to make sure that both people and machines are able to read it based on the shape or orientation of the bumps. We have put hundreds of copies of a serial number spread across the surface of a part so that it's both hidden and universally apparent."

Benning sees this concept as part of the future of digital manufacturing. "This is one of our inventions that serves to tie together our technologies with the future of parts tracking and data systems," said Benning.
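
HP has not published the details of its encoding scheme, but the basic idea Benning describes, spreading many redundant copies of a serial number across a textured surface so it survives wear and partial damage, can be sketched in a few lines. The bump heights, tile count, and serial format below are purely illustrative assumptions.

```python
# Hypothetical sketch of redundant surface-texture tagging: the serial number is
# encoded as a pattern of "bump heights", tiled many times across the part surface,
# and recovered by majority vote even if some tiles are worn or damaged.
import numpy as np

BUMP_HIGH, BUMP_LOW = 0.4, 0.2  # illustrative bump heights in mm

def encode_serial(serial: str) -> np.ndarray:
    """Turn a serial string into a 1-D array of bump heights (8 bits per character)."""
    bits = np.unpackbits(np.frombuffer(serial.encode("ascii"), dtype=np.uint8))
    return np.where(bits == 1, BUMP_HIGH, BUMP_LOW)

def tile_surface(pattern: np.ndarray, copies: int) -> np.ndarray:
    """Repeat the pattern across the surface -- hundreds of copies in practice."""
    return np.tile(pattern, (copies, 1))

def decode_surface(surface: np.ndarray) -> str:
    """Majority-vote each bit position across all tiles, then rebuild the string."""
    bits = (surface > (BUMP_HIGH + BUMP_LOW) / 2).mean(axis=0) > 0.5
    return np.packbits(bits.astype(np.uint8)).tobytes().decode("ascii")

surface = tile_surface(encode_serial("HP-3D-0042"), copies=300)
surface[:100] += np.random.normal(0, 0.05, surface[:100].shape)  # simulate wear on some tiles
print(decode_surface(surface))  # -> "HP-3D-0042"
```

The redundancy is what makes the tag both "hidden and universally apparent": no single bump matters, but the pattern as a whole is easy for a scanner to recover.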

Universities will introduce new ways of thinking

Benning believes that academia and training programs can offer new thought processes to liberate designers from old thinking and allow them to tap into technologies of the future. “3D printing’s biggest impact to manufacturing job skills lie on the design side,” said Benning. “You have a world of designers who have been trained in and grown up with existing technologies like injection molding. Because of this, people unintentionally bias their design toward legacy processes and away from technologies like 3D printing.”

Benning believes one solution for breaking old thinking is to train upcoming engineers in new ways of thinking. “To combat this, educators of current and soon-to-be designers must adjust the thought process that goes into designing for production given the new technologies in the space,” said Benning. “We recognize this will take some time, particularly for universities that are standing up degree programs.” He also believes new software design tools will guide designers to make better use of 3D printing in manufacturing.

Software and data management is critical to the 3D printing future

Benning believes advancements in software and data management will drive improved system management and part quality. This will then lead to better customer outcomes. “Companies within the industry are creating API hooks to build a fluid ecosystem for customers and partners,” said Benning.

HP is beginning to use data to enable ideal designs and optimized workflows for Multi Jet Fusion factories. "This data comes from design files, or mobile devices, or things like HP's FitStation scanning technology and is applied to make production more efficient, and to better deliver individualized products purpose-built for their end customers." The goal of that individualized production is to support custom products built with mass-production manufacturing techniques, leading to a batch-of-one or mass customization.

Rob Spiegel has covered automation and control for 19 years, 17 of them for Design News. Other topics he has covered include supply chain technology, alternative energy, and cyber security. For 10 years, he was owner and publisher of the food magazine Chile Pepper.

Functional Printing Market Key Player, Applications and Business Opportunities till 2027 | Impact of COVID-19

Market Overview: 

In the past decade, the printing industry has seen a lot of development and evolution of technology in the electronics market. Notably, there is more demand for printed electronic materials for wearable electronics, IoT devices, and medical sensors among others. Functional printing, multi-functional printing, and 3D printing are some of the advanced printing techniques that are in high demand across the world owing to its ability to develop fully printed functional devices. Applications of functional printing include RFID tags for inventory control and drug packaging that monitors and communicates patient compliance, interactive product packaging, among others. Many manufacturers have started adopting functional printing to print electronic parts and components owing to its various benefits in terms of design, cost, and material utilization.

Functional printing market is expected to grow from USD 10.52 billion in 2018 to USD 30.27 billion by 2024, at a compound annual growth rate (CAGR) of 19.26% during the forecast period.

Key Players

The prominent players in the functional printing market are HP Development Company, L.P. (US), Haiku Tech (US), Avery Dennison Corporation (US), BASF SE (Germany), Blue Spark Technologies (US), Display Corporation (US), E Ink Holdings Inc. (Taiwan), Eastman Kodak Company Ltd (US), Enfucell Oy (France) and GSI Technologies LLC (US).

Other player in Functional Printing Market are Isorg (France), Mark Andy Inc. (US), Nanosolar Inc. (US), Novaled AG (Germany), Optomec Inc. (US), Toppan Forms Co. Ltd (Japan), Toyo Ink Sc Holding Co. Ltd (US), Vorbeck Materials Corporation (US), Xennia Technology Limited (UK), Xaar PLC (UK) among others.

Functional Printing Market – Segmentation

Global functional printing market has been segmented on the basis of material, printing technology, application, and region.

Based on the material, the market has been segmented into substrates and inks. The substrates segment has been further segmented into glass, plastic, paper, silicon carbide, gallium nitride (GAN), and others. The inks segment has been sub-segmented into conductive inks, graphene ink, dielectric inks, and others.

On the basis of printing technology, the market has been segmented into inkjet printing, screen printing, gravure printing, flexography, and others.

Based on the application, the market has been segmented into sensors, displays, lighting, batteries, photovoltaics, RFID tags, others.

By region, the market has been segmented into North America, Europe, Asia-Pacific, and the rest of the world.

Global Functional Printing Market – Regional Analysis

The global market for functional printing is estimated to grow at a significant rate during the forecast period from 2019 to 2024. The geographical analysis of functional printing market has been done across North America, Europe, Asia-Pacific, and the rest of the world.

North America is dominating the market owing to a surge in demand for near-field communication (NFC) and early adoption of new technologies in the region. 3D printing is being widely adopted by countries in this region, pushing manufacturers to change their business models and supply chains through distributed 3D printing. Countries such as Canada and the US possess an established technical infrastructure which helps in the adoption and implementation of advanced technologies. Asia-Pacific is expected to be the fastest growing region in the coming years owing to the presence of large printing companies supplying electronics and environment materials, films, and interior decor materials. Also, the presence of many giant electronics companies that use printing as part of the manufacturing process for membrane switches, circuitry, tags, displays, and photovoltaics is expected to enhance the growth of the functional printing market in the forecast period. Furthermore, South America and the Middle East and Africa are expected to show considerable growth in the functional printing market during the forecast period.

Medic in heels commands respect on Ukraine's front lines

DONETSK REGION, Ukraine (AP) — All over the Donetsk region, close to the front lines of Russia’s war in Ukraine, Nataliia Voronkova turns up at Ukrainian field positions and hospitals wearing high heels. A colleague bought her running shoes, but Voronkova gave them away.

A helmet and a protective vest aren't part of her uniform, either, as she distributes first-aid kits and other equipment to Ukrainian soldiers and paramedics. She is a civilian, the founder of a medical non-profit, and looking like one is something no one can take from her, even in a combat zone.

“I am myself, and I will never give up my heels for anything,” Voronkova said of the red strappy sandals, beige pumps and other elegant footwear she typically pairs with full skirts and midi dresses as she makes her dangerous rounds to secret military bases and mobile medical units.

The former adviser to the Ukrainian Defense Ministry with graduate degrees in banking and finance is a familiar sight to officers and troops in eastern Ukraine. For eight years after Moscow seized Ukraine's Crimean Peninsula in 2014, Voronkova dedicated her life to providing tactical medical training and equipment for Ukrainian forces fighting pro-Russia separatists.

Russia's invasion of Ukraine in late February has created exponentially more need for her organization, Volunteers Hundred Dobrovolia, and new challenges.

Working on their own, Voronkova and her assistant, Yevhen Veselov, drive a van filled with donated supplies - everything from night vision goggles and battlefield basics like tourniquets and medical staplers to the advanced equipment needed for brain surgery — swiftly through checkpoints, irrespective of curfews. Servicemen recognize Voronkova and with one look, let them through.

The smell of her sweet cherry cigarillos fills the air when she gets out of her van to smoke one with her manicured red nails. Although she manages 20 people and lives in Kyiv, Voronkova has been in eastern Ukraine since the Russians focused their attention there in April, and she insists on delivering first-aid kits to the front line herself.

“A woman is like the neck of the head. She moves everything,” she said.

Voronkova grew up loving medicine, but her family did not want her to pursue it. They were bankers and thought she should take the same career path. The separatist conflict that started in 2014 persuaded her to study combat medicine, and she eventually received certification as an instructor.

From 2015 until Russia invaded Ukraine, the Ukrainian Defense Ministry tasked her with finding solutions to problems encountered by army units in the Donbas. Now, she uses her own teaching techniques to help the units protect themselves and their comrades in battle.

“I still remind my mother that when I was in 10th grade, I had a box filled with (over-the-counter) pills, and all my friends at school knew I had medicine for everything," she said. "Unfortunately, I could not pursue my dream. But today I am implementing it by giving aid.”

Martial law has swelled the ranks of Ukraine's defenders, but many of the people who have joined the military during the war entering its sixth month do not have combat experience or the supplies they need.

“It feels like 2014. We need first-aid kits and uniforms for the territorial defense. I think it was created with hardly any time to allocate a budget for them. Therefore, they need support from volunteers,” Voronkova says.

As she brought boxes of scalpels, electrocoagulation devices, emergency catheters and other supplies to a hospital in the city of Kurakhove, the roar of outgoing rockets and incoming shelling did not make her flinch.

IBM Research Rolls Out A Comprehensive AI And Platform-Based Edge Research Strategy Anchored By Enterprise Use Cases And Partnerships

I recently met with Dr. Nick Fuller, Vice President, Distributed Cloud, at IBM Research for a discussion about IBM’s long-range plans and strategy for artificial intelligence and machine learning at the edge.

Dr. Fuller is responsible for providing AI and platform–based innovation for enterprise digital transformation spanning edge computing and distributed cloud management. He is an IBM Master Inventor with over 75 patents and co-author of 75 technical publications. Dr. Fuller obtained his Bachelor of Science in Physics and Math from Morehouse College and his PhD in Applied Physics from Columbia University.

Edge In, not Cloud Out

In general, Dr. Fuller told me that IBM is focused on developing an "edge in" position versus a "cloud out" position with data, AI, and Kubernetes-based platform technologies to scale hub and spoke deployments of edge applications.

A hub plays the role of a central control plane used for orchestrating the deployment and management of edge applications in a number of connected spoke locations such as a factory floor or a retail branch, where data is generated or locally aggregated for processing.

“Cloud out” refers to the paradigm where cloud service providers are extending their cloud architecture out to edge locations. In contrast, “edge in” refers to a provider-agnostic architecture that is cloud-independent and treats the data-plane as a first-class citizen.

IBM's overall architectural principle is scalability, repeatability, and full stack solution management that allows everything to be managed using a single unified control plane.

IBM’s Red Hat platform and infrastructure strategy anchors the application stack with a unified, scalable, and managed OpenShift-based control plane equipped with a high-performance storage appliance and self-healing system capabilities (inclusive of semi-autonomous operations).

IBM’s strategy also includes several in-progress platform-level technologies for scalable data, AI/ML runtimes, accelerator libraries for Day-2 AI operations, and scalability for the enterprise.

It is an important to mention that IBM is designing its edge platforms with labor cost and technical workforce in mind. Data scientists with PhDs are in high demand, making them difficult to find and expensive to hire once you find them. IBM is designing its edge system capabilities and processes so that domain experts rather than PhDs can deploy new AI models and manage Day-2 operations.

Why edge is important

Advances in computing and storage have made it possible for AI to process mountains of accumulated data to provide solutions. By bringing AI closer to the source of data, edge computing is faster and more efficient than cloud. While cloud data accounts for 60% of the world’s data today, vast amounts of new data are being created at the edge, including industrial applications, traffic cameras, and order management systems, all of which can be processed at the edge in a fast and timely manner.

Public cloud and edge computing differ in capacity, technology, and management. An advantage of edge is that data is processed and analyzed at or near its collection point. In the case of cloud, data must be transferred from a local device into the cloud for analytics and then transferred back to the edge again. Moving data through the network consumes capacity and adds latency to the process. It’s easy to see why executing a transaction at the edge reduces latency and eliminates unnecessary load on the network.

Increased privacy is another benefit of processing data at the edge. Analyzing data where it originates limits the risk of a security breach. Most of the communications between the edge and the cloud is then confined to such things as reporting, data summaries, and AI models, without ever exposing the raw data.

IBM at the Edge

In our discussion, Dr. Fuller provided a few examples to illustrate how IBM plans to provide new and seamless edge solutions for existing enterprise problems.

Example #1 – McDonald’s drive-thru

Dr. Fuller’s first example centered around Quick Service Restaurant’s (QSR) problem of drive-thru order taking. Last year, IBM acquired an automated order-taking system from McDonald's. As part of the acquisition, IBM and McDonald's established a partnership to perfect voice ordering methods using AI. Drive-thru orders are a significant percentage of total QSR orders for McDonald's and other QSR chains.

McDonald's and other QSR restaurants would like every order to be processed as quickly and accurately as possible. For that reason, McDonald's conducted trials at ten Chicago restaurants using an edge-based AI ordering system with NLP (Natural Language Processing) to convert spoken orders into a digital format. It was found that AI had the potential to reduce ordering errors and processing time significantly. Since McDonald's sells almost 7 million hamburgers daily, shaving a minute or two off each order represents a significant opportunity to address labor shortages and increase customer satisfaction.
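
IBM has not disclosed how its automated order taker is built, but the final step of such a pipeline, turning a transcribed utterance into a structured order, can be illustrated with a toy parser. The menu, prices, and matching rules below are invented for the example; a production system would use a trained NLP model rather than regular expressions.

```python
# Toy sketch of the order-structuring step that might follow speech-to-text at the
# drive-thru. Menu items, prices, and the regex matching are illustrative only.
import re

MENU = {"big mac": 3.99, "cheeseburger": 1.49, "fries": 2.19, "coke": 1.00}
NUMBER_WORDS = {"a": 1, "one": 1, "two": 2, "three": 3, "four": 4}

def parse_order(transcript: str) -> list[tuple[int, str, float]]:
    """Return (quantity, item, line_total) tuples found in the transcript."""
    text = transcript.lower()
    order = []
    for item, price in MENU.items():
        # e.g. "two big macs", "a coke"; allow an optional plural 's'
        match = re.search(rf"(\w+)\s+{item}s?\b", text)
        if match:
            qty = NUMBER_WORDS.get(match.group(1), 1)
            order.append((qty, item, round(qty * price, 2)))
    return order

print(parse_order("Hi, can I get two Big Macs, a large fries and one coke"))
# [(2, 'big mac', 7.98), (1, 'fries', 2.19), (1, 'coke', 1.0)]
```

Running this kind of parsing at the edge keeps the round trip off the network, which is where the minute or two of savings per order would come from.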

Example #2 – Boston Dynamics and Spot the agile mobile robot

According to an earlier IBM survey, many manufacturers have already implemented AI-driven robotics with autonomous decision-making capability. The study also indicated that over 80 percent of companies believe AI can help improve future business operations. However, some companies expressed concern about the limited mobility of edge devices and sensors.

To develop a mobile edge solution, IBM teamed up with Boston Dynamics. The partnership created an agile mobile robot using IBM Research and IBM Sustainability Software AI technology. The device can analyze visual sensor readings in hazardous and challenging industrial environments such as manufacturing plants, warehouses, electrical grids, waste treatment plants and other hazardous environments. The value proposition that Boston Dynamics brought to the partnership was Spot the agile mobile robot, a walking, sensing, and actuation platform. Like all edge applications, the robot’s wireless mobility uses self-contained AI/ML that doesn’t require access to cloud data. It uses cameras to read analog devices, visually monitor fire extinguishers, and conduct a visual inspection of human workers to determine if required safety equipment is being worn.

IBM was able to show up to a 10X speedup by automating some manual tasks, such as converting the detection of a problem into an immediate work order in IBM Maximo to correct it. A fast automated response was not only more efficient, but it also improved the safety posture and risk management for these facilities. Similarly, some factories need to thermally monitor equipment to identify any unexpected hot spots that may show up over time, indicative of a potential failure.

IBM is working with National Grid, an energy company, to develop a mobile solution using Spot, the agile mobile robot, for image analysis of transformers and thermal connectors. As shown in the above graphic, Spot also monitored connectors on both flat surfaces and 3D surfaces. IBM was able to show that Spot could detect excessive heat build-up in small connectors, potentially avoiding unsafe conditions or costly outages. This AI/ML edge application can produce faster response times when an issue is detected, which is why IBM believes significant gains are possible by automating the entire process.
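
As a rough illustration of what the analysis side of such an inspection might look like, the sketch below flags connected regions of a thermal image that exceed a temperature threshold. The threshold, minimum region size, and simulated frame are assumptions for the example and are not National Grid's or IBM's actual parameters.

```python
# Minimal sketch of edge-side hotspot detection on a thermal frame (a 2-D array of
# temperatures in degrees C), such as one captured by a robot-mounted thermal camera.
import numpy as np
from scipy import ndimage

HOTSPOT_THRESHOLD_C = 90.0   # assumed alert temperature for a connector
MIN_PIXELS = 5               # ignore single-pixel sensor noise

def find_hotspots(frame: np.ndarray) -> list[dict]:
    """Return one record per connected region hotter than the threshold."""
    labels, n_regions = ndimage.label(frame > HOTSPOT_THRESHOLD_C)
    hotspots = []
    for region_id in range(1, n_regions + 1):
        region = labels == region_id
        if region.sum() < MIN_PIXELS:
            continue
        y, x = ndimage.center_of_mass(region)
        hotspots.append({"centroid": (round(y), round(x)),
                         "peak_c": float(frame[region].max()),
                         "area_px": int(region.sum())})
    return hotspots

frame = np.full((120, 160), 35.0)     # ambient scene
frame[40:48, 60:70] = 105.0           # simulated overheating connector
for spot in find_hotspots(frame):
    print("hotspot detected:", spot)  # in production: open a work order automatically
```

Converting each detection directly into a work order, as IBM does with Maximo, is what turns this kind of loop into the 10X speedup described above.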

IBM market opportunities

Drive-thru orders and mobile robots are just a few examples of the millions of potential AI applications that exist at the edge and are driven by several billion connected devices.

Edge computing is an essential part of enterprise digital transformation. Enterprises seek ways to demonstrate the feasibility of solving business problems using AI/ML and analytics at the edge. However, once a proof of concept has been successfully demonstrated, it is a common problem for a company to struggle with scalability, data governance, and full-stack solution management.

Challenges with scaling

“Determining entry points for AI at the edge is not the difficult part,” Dr. Fuller said. “Scale is the real issue.”

Scaling edge models is complicated because there are so many edge locations with large amounts of diverse content and a high device density. Because large amounts of data are required for training, data gravity is a potential problem. Further, in many scenarios, vast amounts of data are generated quickly, leading to potential data storage and orchestration challenges. AI models are also rarely "finished." Monitoring and retraining of models are necessary to keep up with changes in the environment.

Through IBM Research, IBM is addressing the many challenges of building an all-encompassing edge architecture and horizontally scalable data and AI technologies. IBM has a wealth of edge capabilities and an architecture to create the appropriate platform for each application.

IBM AI entry points at the edge

IBM sees Edge Computing as a $200 billion market by 2025. Dr. Fuller and his organization have identified four key market entry points for developing and expanding IBM’s edge compute strategy. In order of size, IBM believes its priority edge markets to be intelligent factories (Industry 4.0), telcos, retail automation, and connected vehicles.

IBM and its Red Hat portfolio already have an established presence in each market segment, particularly in intelligent operations and telco. Red Hat is also active in the connected vehicles space.

Industry 4.0

There have been three prior industrial revolutions, beginning in the 1700s up to our current in-progress fourth revolution, Industry 4.0, that promotes a digital transformation.

Manufacturing is the fastest growing and the largest of IBM’s four entry markets. In this segment, AI at the edge can improve quality control, production optimization, asset management, and supply chain logistics. IBM believes there are opportunities to achieve a 4x speedup in implementing edge-based AI solutions for manufacturing operations.

For its Industry 4.0 use case development, IBM, through product, development, research and consulting teams, is working with a major automotive OEM. The partnership has established the following joint objectives:

  • Increase automation and scalability across dozens of plants using 100s of AI / ML models. This client has already seen value in applying AI/ML models for manufacturing applications. IBM Research is helping with re-training models and implementing new ones in an edge environment to help scale even more efficiently. Edge offers faster inference and low latency, allowing AI to be deployed in a wider variety of manufacturing operations requiring instant solutions.
  • Dramatically reduce the time required to onboard new models. This will allow training and inference to be done faster and allow large models to be deployed much more quickly. The quicker an AI model can be deployed in production; the quicker the time-to-value and the return-on-investment (ROI).
  • Accelerate deployment of new inspections by reducing the labeling effort and iterations needed to produce a production-ready model via data summarization. Selecting small data sets for annotation means manually examining thousands of images, a time-consuming process that results in labeling of redundant data. Using ML-based automation for data summarization will accelerate the process and produce better model performance (a clustering-based sketch follows this list).
  • Enable Day-2 AI operations to help with data lifecycle automation and governance, model creation, reduce production errors, and provide detection of out-of-distribution data to help determine if a model’s inference is accurate. IBM believes this will allow models to be created faster without data scientists.
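
The data-summarization idea in the third bullet can be made concrete with a common technique (not necessarily the one IBM uses): cluster the embeddings of the unlabeled images and send only the sample nearest each cluster center to a human annotator, so the labeling budget is spent on diverse rather than redundant images. The embedding source and budget below are assumptions.

```python
# Summarize a large pool of unlabeled images for annotation by clustering their
# embeddings (from any pretrained vision backbone) and labeling one representative
# per cluster. Illustrative sketch, not IBM's production pipeline.
import numpy as np
from sklearn.cluster import KMeans

def select_for_labeling(embeddings: np.ndarray, budget: int) -> np.ndarray:
    """Pick `budget` diverse sample indices from an (n_samples, dim) embedding matrix."""
    km = KMeans(n_clusters=budget, n_init=10, random_state=0).fit(embeddings)
    chosen = []
    for k, centre in enumerate(km.cluster_centers_):
        members = np.where(km.labels_ == k)[0]
        dists = np.linalg.norm(embeddings[members] - centre, axis=1)
        chosen.append(members[np.argmin(dists)])   # member closest to the cluster centre
    return np.array(chosen)

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(5000, 128))          # stand-in for real image embeddings
to_label = select_for_labeling(embeddings, budget=50)
print(f"annotate {len(to_label)} of {len(embeddings)} images")
```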

Maximo Application Suite

IBM’s Maximo Application Suite plays an important part in implementing large manufacturers' current and future IBM edge solutions. Maximo is an integrated public or private cloud platform that uses AI, IoT, and analytics to optimize performance, extend asset lifecycles and reduce operational downtime and costs. IBM is working with several large manufacturing clients currently using Maximo to develop edge use cases, and even uses it within its own manufacturing operations.

IBM has research underway to develop a more efficient method of handling life cycle management of large models that require immense amounts of data. Day 2 AI operations tasks can sometimes be more complex than initial model training, deployment, and scaling. Retraining at the edge is difficult because resources are typically limited.

Once a model is trained and deployed, it is important to monitor it for drift caused by changes in data distributions or anything that might cause a model to deviate from original requirements. Inaccuracies can adversely affect model ROI.
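
A drift monitor does not have to be elaborate. One illustrative approach (not IBM's specific implementation) is to compare the distribution of a model's recent input features or confidence scores against a reference captured at validation time with a two-sample test, and trigger Day-2 retraining when they diverge.

```python
# Illustrative drift check: compare recent edge inference confidences against a
# validation-time reference with a two-sample Kolmogorov-Smirnov test and flag the
# model for retraining when the distributions diverge. Thresholds are assumptions.
import numpy as np
from scipy.stats import ks_2samp

def needs_retraining(reference: np.ndarray, recent: np.ndarray,
                     alpha: float = 0.01) -> bool:
    """True if the recent scores are unlikely to come from the reference distribution."""
    statistic, p_value = ks_2samp(reference, recent)
    return p_value < alpha

rng = np.random.default_rng(1)
reference_scores = rng.beta(8, 2, size=10_000)   # confidences observed during validation
drifted_scores = rng.beta(4, 3, size=2_000)      # confidences after, say, a camera change
print(needs_retraining(reference_scores, drifted_scores))  # True -> schedule a retrain
```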

Day-2 AI Operations (retraining and scaling)

Day-2 AI operations consist of continual updates to AI models and applications to keep up with changes in data distributions, changes in the environment, a drop in model performance, availability of new data, and/or new regulations.

IBM recognizes the advantages of performing Day-2 AI Operations, which includes scaling and retraining at the edge. It appears that IBM is the only company with an architecture equipped to effectively handle Day-2 AI operations. That is a significant competitive advantage for IBM.

A company using an architecture that requires data to be moved from the edge back into the cloud for Day-2 related work will be unable to support many factory AI/ML applications because of the sheer number of AI/ML models to support (100s to 1000s).

“There is a huge proliferation of data at the edge that exists in multiple spokes,” Dr. Fuller said. "However, all that data isn’t needed to retrain a model. It is possible to cluster data into groups and then use sampling techniques to retrain the model. There is much value in federated learning from our point of view.”

Federated learning is a promising training solution being researched by IBM and others. It preserves privacy by using a collaboration of edge devices to train models without sharing the data with other entities. It is a good framework to use when resources are limited.
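
The mechanics can be reduced to a very small sketch: each spoke fits a model on its private data and ships only the resulting weights to the hub, which combines them with a sample-weighted average. The linear model and three-factory setup below are stand-ins to show the data-stays-local idea; they are not the IBM Federated Learning API.

```python
# Minimal federated-averaging (FedAvg-style) sketch: spokes train locally, the hub
# averages weights. Raw data never leaves a spoke.
import numpy as np

def local_update(X: np.ndarray, y: np.ndarray) -> tuple[np.ndarray, int]:
    """One spoke's round: fit least-squares weights on private data, return (weights, n)."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w, len(y)

def federated_average(updates: list[tuple[np.ndarray, int]]) -> np.ndarray:
    """Hub aggregation: sample-count-weighted average of spoke weights."""
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

rng = np.random.default_rng(2)
true_w = np.array([2.0, -1.0, 0.5])
spokes = []
for n in (300, 800, 150):                    # three factories with different data volumes
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    spokes.append(local_update(X, y))        # only (weights, n) is sent to the hub
print(federated_average(spokes))             # close to [2.0, -1.0, 0.5]
```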

Dealing with limited resources at the edge is a challenge. IBM’s edge architecture accommodates the need to ensure resource budgets for AI applications are met, especially when deploying multiple applications and multiple models across edge locations. For that reason, IBM developed a method to deploy data and AI applications to scale Day-2 AI operations utilizing hub and spokes.

The graphic above shows the current status quo methods of performing Day-2 operations using centralized applications and a centralized data plane compared to the more efficient managed hub and spoke method with distributed applications and a distributed data plane. The hub allows it all to be managed from a single pane of glass.

Data Fabric Extensions to Hub and Spokes

IBM uses hub and spoke as a model to extend its data fabric. The model should not be thought of in the context of a traditional hub and spoke. IBM’s hub provides centralized capabilities to manage clusters and create multiples hubs that can be aggregated to a higher level. This architecture has four important data management capabilities.

  1. First, models running in unattended environments must be monitored. From an operational standpoint, detecting when a model’s effectiveness has significantly degraded and if corrective action is needed is critical.
  2. Secondly, in a hub and spoke model, data is being generated and collected in many locations creating a need for data life cycle management. Working with large enterprise clients, IBM is building unique capabilities to manage the data plane across the hub and spoke estate - optimized to meet data lifecycle, regulatory & compliance as well as local resource requirements. Automation determines which input data should be selected and labeled for retraining purposes and used to further Improve the model. Identification is also made for atypical data that is judged worthy of human attention.
  3. The third issue relates to AI pipeline compression and adaptation. As mentioned earlier, edge resources are limited and highly heterogeneous. While a cloud-based model might have a few hundred million parameters or more, edge models can’t afford such resource extravagance because of resource limitations. To reduce the edge compute footprint, model compression can reduce the number of parameters, for example from several hundred million to a few million (a pruning sketch follows this list).
  4. Lastly, suppose a scenario exists where data is produced at multiple spokes but cannot leave those spokes for compliance reasons. In that case, IBM Federated Learning allows learning across heterogeneous data in multiple spokes. Users can discover, curate, categorize and share data assets, data sets, analytical models, and their relationships with other organization members.
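
For the compression point in item 3, one widely used technique (illustrative here; a real pipeline would likely combine pruning, quantization, and distillation) is global magnitude pruning: zero out the smallest weights so the model can be stored and executed sparsely on constrained edge hardware.

```python
# Global magnitude pruning sketch: keep only the largest-magnitude weights across all
# layers. Layer names, shapes, and the 90% sparsity target are assumptions.
import numpy as np

def magnitude_prune(layers: dict[str, np.ndarray], sparsity: float) -> dict[str, np.ndarray]:
    """Zero the `sparsity` fraction of weights with the smallest absolute value."""
    all_weights = np.concatenate([w.ravel() for w in layers.values()])
    threshold = np.quantile(np.abs(all_weights), sparsity)
    return {name: np.where(np.abs(w) < threshold, 0.0, w) for name, w in layers.items()}

rng = np.random.default_rng(3)
layers = {"conv1": rng.normal(size=(64, 3, 3, 3)),
          "fc": rng.normal(size=(1000, 512))}
pruned = magnitude_prune(layers, sparsity=0.9)        # keep roughly 10% of the parameters
kept = sum(int((w != 0).sum()) for w in pruned.values())
total = sum(w.size for w in pruned.values())
print(f"{kept}/{total} weights remain ({kept / total:.0%})")
```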

In addition to AI deployments, the hub and spoke architecture and the previously mentioned capabilities can be employed more generally to tackle challenges faced by many enterprises in consistently managing an abundance of devices within and across their enterprise locations. Management of the software delivery lifecycle or addressing security vulnerabilities across a vast estate are a case in point.

Multicloud and Edge platform

In the context of its strategy, IBM sees edge and distributed cloud as an extension of its hybrid cloud platform built around Red Hat OpenShift. One of the newer and more useful options created by the Red Hat development team is the Single Node OpenShift (SNO), a compact version of OpenShift that fits on a single server. It is suitable for addressing locations that are still servers but come in a single node, not clustered, deployment type.

For smaller footprints such as industrial PCs or computer vision boards (for example NVidia Jetson Xavier), Red Hat is working on a project which builds an even smaller version of OpenShift, called MicroShift, that provides full application deployment and Kubernetes management capabilities. It is packaged so that it can be used for edge device type deployments.

Overall, IBM and Red Hat have developed a full complement of options to address a large spectrum of deployments across different edge locations and footprints, ranging from containers to management of full-blown Kubernetes applications from MicroShift to OpenShift and IBM Edge Application Manager.

Much is still in the research stage. IBM's objective is to achieve greater consistency in terms of how locations and application lifecycle is managed.

First, Red Hat plans to introduce hierarchical layers of management with Red Hat Advanced Cluster Management (RHACM), to scale by two to three orders of magnitude the number of edge locations managed by this product. Additionally, securing edge locations is a major focus. Red Hat is continuously expanding platform security features, for example by recently including Integrity Measurement Architecture in Red Hat Enterprise Linux, or by adding Integrity Shield to protect policies in Red Hat Advanced Cluster Management (RHACM).

Red Hat is partnering with IBM Research to advance technologies that will permit it to protect platform integrity and the integrity of client workloads through the entire software supply chains. In addition, IBM Research is working with Red Hat on analytic capabilities to identify and remediate vulnerabilities and other security risks in code and configurations.

Telco network intelligence and slice management with AI/ML

Communication service providers (CSPs) such as telcos are key enablers of 5G at the edge. 5G benefits for these providers include:

  • Reduced operating costs
  • Improved efficiency
  • Increased distribution and density
  • Lower latency

The end-to-end 5G network comprises the Radio Access Network (RAN), transport, and core domains. Network slicing in 5G is an architecture that enables multiple virtual and independent end-to-end logical networks with different characteristics such as low latency or high bandwidth, to be supported on the same physical network. This is implemented using cloud-native technology enablers such as software defined networking (SDN), virtualization, and multi-access edge computing. Slicing offers necessary flexibility by allowing the creation of specific applications, unique services, and defined user groups or networks.

An important aspect of enabling AI at the edge requires IBM to provide CSPs with the capability to deploy and manage applications across various enterprise locations, possibly spanning multiple end-to-end network slices, using a single pane of glass.

5G network slicing and slice management

Network slices are an essential part of IBM's edge infrastructure that must be automated, orchestrated and optimized according to 5G standards. IBM’s strategy is to leverage AI/ML to efficiently manage, scale, and optimize the slice quality of service, measured in terms of bandwidth, latency, or other metrics.
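
As a simple picture of what an AI/ML slice-management loop does, the sketch below smooths each slice's measured latency with an exponential moving average and flags slices whose smoothed latency breaches an assumed SLA, which an orchestrator could then scale or re-place. The slice names, targets, and metrics are invented; this is not the Cloud Pak for Network Automation API.

```python
# Per-slice SLA monitoring sketch: smooth latency with an exponential moving average
# and report slices that breach their assumed latency target.
import numpy as np

SLICE_SLA_MS = {"urllc-robotics": 5.0, "embb-video": 40.0}   # assumed latency targets

def ewma(samples: np.ndarray, alpha: float = 0.3) -> float:
    """Exponentially weighted moving average of a latency series (most recent last)."""
    value = samples[0]
    for s in samples[1:]:
        value = alpha * s + (1 - alpha) * value
    return float(value)

def slices_to_scale(latency_ms: dict[str, np.ndarray]) -> list[str]:
    """Return the slices whose smoothed latency exceeds their SLA."""
    return [name for name, series in latency_ms.items()
            if ewma(series) > SLICE_SLA_MS[name]]

rng = np.random.default_rng(4)
metrics = {"urllc-robotics": rng.normal(4.0, 0.3, 60),                    # healthy slice
           "embb-video": np.linspace(30, 70, 60) + rng.normal(0, 2, 60)}  # degrading slice
print(slices_to_scale(metrics))   # ['embb-video'] -> candidate for scale-out or re-slicing
```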

5G and AI/ML at the edge also represent a significant opportunity for CSPs to move beyond traditional cellular services and capture new sources of revenue with new services.

Communications service providers need management and control of 5G network slicing enabled with AI-powered automation.

Dr. Fuller sees a variety of opportunities in this area. "When it comes to applying AI and ML on the network, you can detect things like intrusion detection and malicious actors," he said. "You can also determine the best way to route traffic to an end user. Automating 5G functions that run on the network using IBM network automation software also serves as an entry point.”

In IBM’s current telecom trial, IBM Research is spearheading the development of a range of capabilities targeted for the IBM Cloud Pak for Network Automation product using AI and automation to orchestrate, operate and optimize multivendor network functions and services that include:

  • End-to-end 5G network slice management with planning & design, automation & orchestration, and operations & assurance
  • Network Data Analytics Function (NWDAF), which collects data for slice monitoring from 5G core network functions, performs network analytics, and provides insights to authorized data consumers
  • Improved operational efficiency and reduced cost

Future leverage of these capabilities by existing IBM clients that use Cloud Pak for Network Automation (e.g., DISH) can offer further differentiation for CSPs.

5G radio access

Open radio access networks (O-RANs) are expected to significantly impact telco 5G wireless edge applications by allowing a greater variety of units to access the system. The O-RAN concept splits the functions of the 4G baseband unit into a Distributed Unit (DU) and a Centralized Unit (CU) and connects them with open interfaces.

An O-RAN system is more flexible. It uses AI over these open interfaces to optimize how a device's connection is categorized, by analyzing information about the device's prior use. Like other edge models, the O-RAN architecture provides an opportunity for continuous monitoring, verification, analysis, and optimization of AI models.

The IBM-telco collaboration is expected to advance O-RAN interfaces and workflows. Areas currently under development are:

  • Multi-modal (RF level + network-level) analytics (AI/ML) for wireless communication with high-speed ingest of 5G data
  • Capability to learn patterns of metric and log data across CUs and DUs in RF analytics
  • Utilization of the antenna control plane to optimize throughput
  • Primitives for forecasting, anomaly detection and root cause analysis using ML
  • Opportunities for value-added functions in O-RAN

IBM Cloud and Infrastructure

The cornerstone for the delivery of IBM's edge solutions as a service is IBM Cloud Satellite. It presents a consistent, cloud-native operational view with OpenShift and IBM Cloud PaaS services at the edge. In addition, IBM's integrated hardware and software edge systems will provide RHACM-based management of the platform when clients or third parties have existing managed-as-a-service models. It is essential to note that, in either case, this is done within a single control plane for hubs and spokes, which helps optimize execution and management from any cloud to the edge in the hub-and-spoke model.

IBM's focus on “edge in” means it can provide the infrastructure through offerings like the example shown above: software-defined storage for a federated-namespace data lake that surrounds other hyperscaler clouds. Additionally, IBM is exploring integrated full-stack edge storage appliances based on hyperconverged infrastructure (HCI), such as Spectrum Fusion HCI, for enterprise edge deployments.

As mentioned earlier, data gravity is one of the main driving factors of edge deployments. IBM has designed its infrastructure to meet those data gravity requirements, not just for the existing hub and spoke topology but also for a future spoke-to-spoke topology where peer-to-peer data sharing becomes imperative (as illustrated with the wealth of examples provided in this article).

Wrap up

Edge is a distributed computing model. One of its main advantages is that computing, data storage, and processing are kept close to where data is created. Because data does not need to move to the cloud for processing, analytics and AI can be applied in real time, providing immediate solutions and driving business value.

IBM's goal is not to move the entirety of its cloud infrastructure to the edge. That would add little value: the edge would simply function as a spoke in a hub-to-spoke model, operating on actions and configurations dictated by the hub.

IBM’s architecture will provide the edge with autonomy to determine where data should reside and from where the control plane should be exercised.

Equally important, IBM foresees this architecture evolving into a decentralized model capable of edge-to-edge interactions. IBM has no firm designs for this as yet. However, the plan is to make the edge infrastructure and platform a first-class citizen instead of relying on the cloud to drive what happens at the edge.

Developing a complete and comprehensive AI/ML edge architecture - and in fact, an entire ecosystem - is a massive undertaking. IBM faces many known and unknown challenges that must be solved before it can achieve success.

However, IBM is one of the few companies with the necessary partners and the technical and financial resources to undertake and successfully implement a project of this magnitude and complexity.

It is reassuring that IBM has a plan and that its plan is sound.

Paul Smith-Goodson is Vice President and Principal Analyst for quantum computing, artificial intelligence and space at Moor Insights and Strategy. You can follow him on Twitter for more current information on quantum, AI, and space.

Note: Moor Insights & Strategy writers and editors may have contributed to this article.

Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Infinidat, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, MulteFire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.

Mon, 08 Aug 2022 03:51:00 -0500 Paul Smith-Goodson https://www.forbes.com/sites/moorinsights/2022/08/08/ibm-research-rolls-out-a-comprehensive-ai-and-ml-edge-research-strategy-anchored-by-enterprise-partnerships-and-use-cases/
Killexams : Pininfarina's 1,900-HP, $1.9 Million Electric Car Is Ready To Roll

The legendary home of some of the world's most stunning car designs is now home to a new project. The first Pininfarina Battista Hyper GT production model has been completed.

"The Battista hyper GT is the realisation of a dream, which began with design icon Battista 'Pinin' Farina's ambition to create a beautiful car bearing only the Pininfarina name," said Per Svantesson, CEO of Automobili Pininfarina.

"We are proud to have achieved that goal and in doing so, we lead a movement into an exciting new luxury era, where design purity and a focus on sustainable innovations will shape a series of incredible new vehicles from Automobili Pininfarina."

Each of the $1.99 million USD Battista Hyper GTs takes up to 1,340 hours of hand-crafting to complete. The process is part of a collaborative design approach where buyers work hand-in-hand with the heralded brand's experts.

The Atelier is a 2,300-square-meter facility that is split into 14 production and quality assurance zones.

The Pininfarina Battista Hyper GT sits at the company’s Atelier in Italy. Automobili Pininfarina

Modern vehicle construction techniques are paired with centuries-old coachbuilding know-how, a unique blend that epitomizes what the Pininfarina brand hopes to bring to its customers.

From start to finish, assembly of each Battista takes 10 weeks. Some highly customized models take longer. The Battista Anniversario took 18 weeks to complete.

Each Battista Hyper GT starts life as a rolling chassis comprising an electric powertrain, a T-shaped battery, a carbon fiber monocoque, and electrical systems.

Its Body-in-White is primed then bonded to the monocoque. The Goccia roof is fitted, enclosing the cabin and adding rigidity.

After this, a two-day measurement process checks the car's dimensions to ensure that they are within permitted tolerances.

The first Pininfarina Battista Hyper GT production model pictured on the streets of Monaco. Automobili Pininfarina

The body is then dismounted from the chassis and sent to be painted, a process that takes three to four weeks, on average.

An additional two days is needed to assemble most of the vehicle, including the wheel arches and the butterfly doors.

Finally, end-of-line checks and commissioning are undertaken before the vehicle undergoes a 24-hour wheel and steering alignment, including a dip through the water management area to ensure the model is watertight.

Pininfarina's Atelier completed the first production model then took it to Monaco for its debut client test experience before shipping the car to the United States, where it will take part in Monterey Motor Week.

Its arrival in the U.S. is one year after the automaker made a splash during its first appearance at Monterey Motor Week by showcasing the model on a drive route along the Pacific coast.

Just 150 Battista Hyper GTs will be produced.

Wed, 13 Jul 2022 04:46:00 -0500 https://www.newsweek.com/pininfarinas-1900-hp-29-million-electric-car-ready-roll-1723243
Killexams : Top speaker line-up announced for the Future of Education summit 2022

Leaders in business and academia will tackle The Pathway to Digital Transformation as they engage in one-on-one interviews and panel discussions at the 8th Annual Future of Education Summit, by CNBC AFRICA in partnership with FORBES AFRICA. This free-to-attend, virtually hosted event takes place on Friday, 29 July from 10.30am to 3.30pm, and is set to lead the dialogue on digital solutions in the tertiary education sector.

"We're very excited to welcome global leaders who have navigated digital platforms and are advancing the access and functionality of this space for the continent," said Dr Rakesh Wahi, Co-founder of the ABN Group and Founder of the Future of Education Summit. "The time for adopting digital solutions is now, but navigating a path that overcomes the challenges faced by the continent requires collaboration. That's why we're so looking forward to the solutions-driven approach of our speakers."

Dr Wahi, who will be welcoming this year's audience, is a visionary entrepreneur who has been involved with early-stage investments in emerging markets for the last 30 years. He is a well-respected member of the investment community and has distinguished himself in the field of IT, telecoms, media, technology and education investments. Alongside his role in the summit and with ABN Group, Dr Wahi is Chairman of CMA Investment Holdings that has representation through its portfolio companies in over 20 countries.

The 2022 Future of Education panellists

Bradley Pulford, Managing Director for HP, is one of the high-profile speakers joining the Future of Education Summit. In his role as Managing Director for HP Africa, Pulford is dedicated to supporting and enhancing the continent's rapidly accelerating economic growth and furthering HP's vision of diversity and inclusion.

Pulford is set to unpack the importance of digital equity in elevating the African education system during the Technology Challenges in Teaching and Learning panel discussion. Recent research conducted by HP shows that, while educators are positive about the future of the profession, there is an urgent need to improve their soft skills for future-proofing classrooms. During his discussion, Pulford will explain how public-private partnerships contribute to elevating the education fraternity and providing long-term support for educators.

Pulford will be joined by Dan Adkins, Group CEO of Transnational Academic Group, who is responsible for teaching in the Foundation and Business programmes. With a solid grounding in the IT industry worldwide, and an MBA and a Post-Graduate Certificate in Business Research from Heriot-Watt University, Adkins is well-versed in the uses of technology in the tertiary sector. He has lectured at university level across a number of subjects and has overseen the development of multiple foundation programmes while also providing seminars on education for TEDx.

Also speaking on the topic is Prof Barry Dwolatzky, an Emeritus Professor and Director of Innovation Strategy at the University of the Witwatersrand. Prof Dwolatzky, who has more than 30 years of experience leading students into the digital future, also serves as the Chief Visionary Officer for the Tshimologong Precinct, and is the Director and CEO of the Joburg Centre for Software Engineering.

He will be joined by Suraj Shah, Lead for the Regional Centre for Innovative Teaching and Learning at the Mastercard Foundation (the Centre), who is responsible for the implementation of partnerships between the Centre and the various governments and ministries of education in Africa. He is currently aligning EdTech entrepreneurs with the governments of Rwanda, Kenya, Ethiopia and Ghana, with a view to scaling up technology innovations that improve teaching and learning in secondary education. He is passionate about women's empowerment and about nurturing innovation and research in sub-Saharan Africa.

The topic of Digital Transformation in Education will be taken on by Prof Gary Martin, CEO and Executive Director of the Australian Institute of Management since 2012. He is tasked with leading all aspects of the business, focussed on building leadership, management and workplace capability in Australia and internationally, across the corporate, government, not-for-profit and community sectors.

He is joined by Dr Kirti Menon, the Senior Director for the Division for Teaching Excellence at the University of Johannesburg who has served on national task teams with a research focus on access, exclusion and redress in higher education. As a Research Associate affiliated to the UJ Faculty of Education, Dr Menon is widely published in the fields of higher education, curriculum transformation, social exclusion and access.

Another expert addressing digital transformation is Prof Seth Kunin, Deputy Vice-Chancellor of Curtin University, Australia's seventh largest university – and one of the most international. Kunin's portfolio includes international relations; marketing, recruitment and admissions; transnational education through branch campuses and partnerships; study abroad and exchange; international scholarships; and quality.

Prof Mark Smith, President and Vice-Chancellor University of Southampton, brings in-depth knowledge to the panel having published more than 380 papers on advanced magnetic resonance techniques throughout his career. In his position at the university, he also holds a number of external appointments including membership of Higher Education Statistics Agency (HESA) Board; Senior Independent Member of UKRI EPSRC's Council; and board member of the Higher Education Funding Council for Wales, chairing their Research Wales Committee.

Unpacking Lessons from Covid & Developed World Transformation Strategies for African Education features another esteemed panel line-up, among them Prof Stan du Plessis, COO and Economics Professor at Stellenbosch University, a specialist in macroeconomics and monetary policy who has advised the South African Reserve Bank and National Treasury on macroeconomic policy.

Prof Kirk Semple, Director of International Engagement of the Lancaster Environment Centre at Lancaster University, will share his insights garnered over 30 years in academia, specialising in environmental microbiology. In his current role, he has been involved in international activities and partnerships for the university, specifically in sub-Saharan Africa.

Prof Zeblon Vilakazi, Vice-Chancellor and Principal at the University of the Witwatersrand, has been instrumental in establishing South Africa's first experimental high-energy physics research group at CERN, working on the Large Hadron Collider. He has fostered international collaborative research as Director of iThemba LABS, where he initiated a flagship rare isotope beam (RIB) project. He has also played a role in securing a place for African academic partners in the development of practical applications through access to the IBM Quantum Computing network.

They are joined by Adetomi Soyinka, Director of Programmes and Regional Portfolio Lead for the British Council's Higher Education Programme in Sub-Saharan Africa, who has more than 15 years' experience working in the commercial and international development sectors and a demonstrated track record of achievement in the design and delivery of multiple youth-centred projects across education, skills for employability and enterprise.

The British Council is collaborating on the summit, showcasing its commitment to investing in education and opportunity in Africa. Commenting on this, Soyinka said: "Education and innovation are critical pathways to improve the economic well-being of Africa's future, and being part of this summit aligns with our vision of connecting international education communities, identifying mutually beneficial collaboration areas, removing learning barriers, and facilitating partnerships between various higher education sectors in Africa."

Tackling the Transformation of Higher Education Leadership is Prof Malcolm McIver, CEO and Provost of Lancaster University in Ghana. He's an experienced academic and education manager with a successful history of working in the higher education industry, international education, and transnational education.

Jon Foster-Pedley, Dean and Director of Henley Business School in Africa – the first school to be accredited by The Association of African Business Schools (AABS) – will also lend his expertise to the panel. Henley forms part of the Henley Business School UK, a leading global business school with campuses in Europe, Asia and Africa. He boasts 45 years of international working experience as a professor of innovation, MBA director, director of executive education, designer and director of numerous executive education programmes, and lecturer in strategy, innovation and executive learning. His interests are economic and educational transformation, sustainability and business evolution.

Jaye Richards-Hill, Director of Education Industry for Middle East and Africa, Microsoft Corp, will also provide her unique perspective on the topic when she joins the panel. She has more than 30 years of international experience in teaching and training in the education and corporate sectors. Richards-Hill has also worked on government-level projects, including the recent Operation Phakisa Education Lab for the Office of the President in South Africa and the Scottish Qualifications Authority Future Models of Assessment group. She was also a member of the ICT in Education Excellence Group, a collective of education experts which advised the Scottish Secretary of State for Education on reforms to the national eLearning project and technology-driven transformation.

For the panel discussion on The Schools Business: Digital Transformation in Formal K-12 Schooling and Supplementary Tutoring, audiences can look forward to hearing from Edward Mosuwe, Head of Gauteng Department of Education, responsible for the overall leadership and management of the department, as well as serving as the accounting officer. Mosuwe has extensive experience in education having served as an academic at the then Technikon Witwatersrand (now the University of Johannesburg) and as a policy developer and a bureaucrat within the public service at national level.

Joining Mosuwe on the panel is Stacey Brewer, Co-founder and CEO of Spark Schools, an independent private school network which provides high quality, affordable education to previously underserved communities. Dean McCoubrey, Founder of the multi-award-winning EdTech Digital Citizenship Program and MySociaLife - teaching pupils media literacy and online safety – also joins the panel. He brings valuable insight into online learning, currently training Child Psychiatry Units on the latest online challenges to child development. Dean has also spoken at the World Innovation Summit for Education in Qatar (2019), The World Education Conference in Mumbai (2020) and World Mental Health Congress (June 2021), alongside many local education and mind health events.

Yandiswa Xhakaza, Director and Principal of UCT Online High School – one of the event sponsors – will join the discussion, bringing her expertise as an educationalist with significant experience in South Africa's basic education sector.

"I'm delighted to be joining the Future of Education Summit this year as a key speaker on behalf of UCT Online High School, our extended team of teachers, learning designers and support coaches," said Xhakaza. "I will be discussing UCT Online High School's successes to date, market impact, learning technology advancements and unpacking the issue of the digital divide. Along with Valenture Institute, we're committed to accelerating access to world-class high school education, so that we can unleash South Africa's potential."

The 2022 Future of Education individual speakers

This year's keynote address will be given by Prof Andy Schofield, Vice-Chancellor of Lancaster University and an award-winning theoretical physicist working in the area of condensed matter physics specialising in correlated electrons. He studied Natural Sciences followed by a PhD at Gonville and Caius College, Cambridge where he was appointed to a Research Fellowship in 1992. He moved to the USA in 1994 working at Rutgers for two years before returning to Cambridge.

During a one-on-one session, Bello Tongo, the CEO of Tongston Entrepreneurship, will discuss the topic Incorporating Entrepreneurship Thinking in Education from Primary to Tertiary Levels. Tongo has extensive experience as a multi-award-winning entrepreneur, educator and industry leader whose company is one of the top 50 global education organisations according to the Global Forum for Education & Learning.

Prof Tshilidzi Marwala, the Vice-Chancellor of the University of Johannesburg and recently appointed Deputy Chair of the Presidential Commission on the Fourth Industrial Revolution, will also engage in a one-on-one interview focusing on Transformation in the Education Sector. As an accomplished scholar with multi-disciplinary research interests – artificial intelligence in engineering, computer science, finance, social science and medicine – Prof Marwala will bring unique insights into this topic.

Included in this year's one-on-one interviews is Robert Paddock, the CEO and Founder of Valenture Institute, a social enterprise turning physical limitations into digital opportunities by enabling students to choose an aspirational school regardless of their circumstances.

Don't miss out on these dynamic discussions that unlock technological potential for the tertiary education space! To book your place at the free-to-attend Future of Education webinar, register here https://hopin.com/events/future-of-education-summit-29-july-2022.

CNBC AFRICA, in partnership with FORBES AFRICA, extends thanks to the sponsors: The British Council, HP and the Transnational Academic Group, and UCT Online High School.

Tue, 26 Jul 2022 06:04:00 -0500 https://www.businessghana.com/site/news/general/267381/Top-speaker-line-up-announced-for-the-Future-of-Education-summit-2022
Killexams : Understanding NIST’s post-quantum encryption standardization and next steps for CISOs

By Duncan Jones, Head of Cybersecurity at Quantinuum

In a recent National Security Memorandum (NSM-10), the White House acknowledged the need for immediacy in addressing the threat quantum computers pose to our current cryptographic systems and mandated that agencies comply with its initial plans to prepare. It is the first directive that mandates specific actions for agencies as they begin a very long and complex migration to quantum-resistant cryptography. Many of the actions required of agencies depend on new cryptographic algorithms that have just been chosen by the National Institute of Standards and Technology (NIST), although final standardization will take 18 to 24 months.

What should CISOs be doing to prepare for the risks of quantum computers and to comply with NSM-10 requirements? They should start by gaining an understanding of the new algorithm standards, and from there, focus on inventorying the agency’s most important information and assets. 

NIST to the rescue

In as little as a decade, quantum computers will break many of the encryption schemes in use today, such as the popular RSA algorithm that we use for encrypting internet data and for digitally signing transactions. An attacker with a powerful quantum computer will be able to read data encrypted by an RSA public key or forge transactions signed by an RSA private key. Worse, a category of attack known as “hack now, decrypt later” may already be under way. Attackers who record data using quantum-vulnerable algorithms now can retrospectively decrypt it in the future using quantum computers. For any agency or contractor that shares data with a long sensitivity lifespan, this is a real concern.

Fortunately, the academic world has not been sitting idle. Since 2016, NIST has been working with the cryptographic community to identify and standardize new quantum-proof encryption algorithms. The NIST process will help ensure that these algorithms become standardized in Federal Information Processing Standards publications and are ready for consumption by federal authorities. As such, it’s important for CISOs to familiarize themselves with the new algorithms and their properties.

Each post-quantum algorithm has three different security levels defined—SL1, SL3 and SL5. These levels are very similar to key sizes in today’s algorithms. Much like 4096-bit RSA keys are stronger than 1024-bit RSA keys, SL5 is stronger than SL3 and SL1. However, that increased security comes at a cost. SL5 keys are typically larger to store and result in slower computations. It’s also notable that post-quantum algorithms cannot be used for both encryption and data signing. Instead, they are used for only one task or the other. This means we will be replacing a single algorithm, such as RSA, with two separate algorithms.

The table below shows some of the characteristics of the selected algorithms.

Algorithm            Type               Family          Public Key Size    Ciphertext/Signature Size
CRYSTALS-KYBER       Key Establishment  Lattice-based   1.6-3.1 KB         0.8-1.5 KB
CRYSTALS-Dilithium   Signature          Lattice-based   2.5-4.8 KB         2.4-4.6 KB
Falcon               Signature          Lattice-based   1.2-2.3 KB         0.7-1.3 KB
SPHINCS+             Signature          Hash-based      0.03-0.06 KB       7.7-49 KB
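To make the roles and size trade-offs in the table concrete, the short sketch below generates a post-quantum signature keypair, signs a message, and prints the resulting key and signature sizes. It is a minimal illustration only, assuming the open-source liboqs-python bindings (the `oqs` module) are installed and built with the `Dilithium3` identifier enabled; it is not part of the NIST standard itself, and the exact byte counts depend on the parameter set chosen.

```python
# Minimal sketch: post-quantum signing with liboqs-python.
# Assumes liboqs-python is installed and the underlying liboqs build
# enables the "Dilithium3" mechanism (an assumption, not a given).
import oqs

message = b"Sample transaction to be signed"

with oqs.Signature("Dilithium3") as signer:
    public_key = signer.generate_keypair()   # secret key stays inside the object
    signature = signer.sign(message)

    print(f"Public key size: {len(public_key)} bytes")
    print(f"Signature size:  {len(signature)} bytes")

    # Verification needs only the message, signature, and public key.
    with oqs.Signature("Dilithium3") as verifier:
        assert verifier.verify(message, signature, public_key)
```

Running the same sketch with a different parameter set (for example `Dilithium2` or `Dilithium5`, if enabled in your build) illustrates the security-level trade-off described above: higher levels mean larger keys and signatures and slower operations.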

For immediate action

According to NIST’s chief of the Computer Security Division, Matt Scholl, “…don't wait for the standard to be done. Start inventorying your most important information. Ask yourself what is that data that an adversary is going to want to break into first.”

According to NSM-10, leaders from the Office of Management and Budget, the Cybersecurity and Infrastructure Security Agency (CISA), NIST and the National Security Agency will be establishing requirements for inventorying all currently deployed cryptographic systems within six months of the May 4 memo. Within a year, and on an annual basis, “…heads of all federal civilian executive branch agencies shall deliver to the director of CISA and the national cyber director an inventory of their IT systems that remain vulnerable to CRQCs.” (A CRQC is a cryptanalytically relevant quantum computer.)

Agency inventory requirements will include: 

  • A list of key information technology assets to prioritize
  • Interim benchmarks
  • A common—and preferably automated—assessment process for evaluating progress on quantum-resistant cryptographic migration in IT systems

Migrating an agency or department to a fully post-quantum position is a complex process that will take many years. Although these post-quantum algorithms will not be ready for widespread production use until the standardization process finishes in 2024, considerable work, now mandated under the NSM-10 directive, must be done to prepare for these changes, starting with the inventory process.

Next steps for federal CISOs

Identify data assets and use of cryptography. Before you can prioritize migration, you need to understand exactly what data you have, and how vulnerable it is to attack. Data that is particularly sensitive and vulnerable to the “hack-now, decrypt-later” attacks should be prioritized above less sensitive data that isn’t transmitted freely. CISOs should start cataloging where quantum-vulnerable algorithms are currently being used. For a variety of reasons, not all systems will be affected equally. CISOs need a very clear picture of the vulnerabilities present in each of their systems.
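As one small starting point for that catalog, the hedged sketch below scans a directory of PEM certificates and flags public keys that rely on quantum-vulnerable algorithms (RSA and elliptic-curve keys). It uses the widely available Python `cryptography` package; the `./certs` directory and the focus on certificate files are illustrative assumptions, since a real inventory must also cover protocols, libraries, hardware tokens, and data flows.

```python
# Illustrative inventory helper: flag certificates whose public keys rely on
# quantum-vulnerable algorithms. Assumes `pip install cryptography` and a
# hypothetical ./certs directory of PEM-encoded certificates.
from pathlib import Path

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa


def audit_certificates(cert_dir: str = "./certs") -> list[tuple[str, str, str]]:
    findings = []
    for pem_path in sorted(Path(cert_dir).glob("*.pem")):
        cert = x509.load_pem_x509_certificate(pem_path.read_bytes())
        key = cert.public_key()
        if isinstance(key, rsa.RSAPublicKey):
            findings.append((pem_path.name, f"RSA-{key.key_size}", "quantum-vulnerable"))
        elif isinstance(key, ec.EllipticCurvePublicKey):
            findings.append((pem_path.name, f"EC/{key.curve.name}", "quantum-vulnerable"))
        else:
            findings.append((pem_path.name, type(key).__name__, "review manually"))
    return findings


if __name__ == "__main__":
    for name, algorithm, status in audit_certificates():
        print(f"{name:<30} {algorithm:<20} {status}")
```

Ranking the flagged systems by data sensitivity and exposure to "hack now, decrypt later" collection then gives a defensible order for migration.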

Speak with vendors. Now is the perfect time to be asking your vendors about their plans for adopting post-quantum algorithms. A good vendor should have a clear roadmap already in place and be testing the candidate algorithms in preparation for 2024.

Test algorithms for home-grown software. Post-quantum algorithms have different properties than the algorithms we use today. The only way to know how they will affect your systems is to implement them and experiment. To assist with potential compatibility issues, NSM-10 encourages agency heads to begin conducting “…tests of commercial solutions that have implemented pre-standardized quantum-resistant cryptographic algorithms.” 

A good place to start is with the Open Quantum Safe project, which provides many different implementations of post-quantum algorithms designed for experimentation. 
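For example, the project's liboqs-python bindings expose the selected key-establishment algorithm through a small API. The sketch below performs a Kyber round trip between a notional client and server; the `Kyber768` identifier and the bindings' context-manager style are assumptions about a particular liboqs build, not a statement about the final FIPS standard.

```python
# Hedged experiment: CRYSTALS-Kyber key establishment via liboqs-python.
# Assumes liboqs-python is installed and the build enables "Kyber768".
import oqs

KEM_ALG = "Kyber768"

with oqs.KeyEncapsulation(KEM_ALG) as client, oqs.KeyEncapsulation(KEM_ALG) as server:
    # Client generates a keypair and would send the public key to the server.
    client_public_key = client.generate_keypair()

    # Server encapsulates: produces a ciphertext for the client plus its own
    # copy of the shared secret.
    ciphertext, server_secret = server.encap_secret(client_public_key)

    # Client decapsulates the ciphertext to recover the same shared secret.
    client_secret = client.decap_secret(ciphertext)

    assert client_secret == server_secret
    print(f"Public key: {len(client_public_key)} bytes, "
          f"ciphertext: {len(ciphertext)} bytes, "
          f"shared secret: {len(client_secret)} bytes")
```

Timing this round trip against an equivalent RSA or ECDH exchange in existing services is a practical way to surface latency, bandwidth, and storage impacts before the finalized standards arrive in 2024.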

Quantum is not all bad news. It is worth remembering that quantum computing also offers new techniques for strengthening existing systems. Quantum computers are already being used today to generate stronger cryptographic keys. In the future, once this migration to post-quantum algorithms is behind us, we’ll view quantum as a gift to cybersecurity, not a threat.

 Duncan Jones is the head of cybersecurity at Quantinuum.

Tue, 26 Jul 2022 06:00:00 -0500 https://gcn.com/cybersecurity/2022/07/understanding-nists-post-quantum-encryption-standardization-and-next-steps-cisos/374930/?oref=gcn-skybox-hp
Killexams : Stolen Credentials Selling on the Dark Web for Price of a Gallon of Gas

HP Inc.

New HP Wolf Security report exposes ironic “honor among thieves” as cybercriminals rely on dispute resolution services, $3k vendor bonds and escrow payments to ensure “fair” dealings

PALO ALTO, Calif., July 21, 2022 (GLOBE NEWSWIRE) -- HP Inc. (NYSE: HPQ) today released The Evolution of Cybercrime: Why the Dark Web is Supercharging the Threat Landscape and How to Fight Back – an HP Wolf Security Report. The findings show cybercrime is being supercharged through “plug and play” malware kits that make it easier than ever to launch attacks. Cyber syndicates are collaborating with amateur attackers to target businesses, putting our online world at risk.

The HP Wolf Security threat team worked with Forensic Pathways, a leading group of global forensic professionals, on a three-month dark web investigation, scraping and analyzing over 35 million cybercriminal marketplaces and forum posts to understand how cybercriminals operate, gain trust, and build reputation.

Key findings include:

  • Malware is cheap and readily available – Over three quarters (76%) of malware advertisements listed, and 91% of exploits (i.e. code that gives attackers control over systems by taking advantage of software bugs), retail for under $10 USD. The average cost of compromised Remote Desktop Protocol credentials is just $5 USD. Vendors are selling products in bundles, with plug-and-play malware kits, malware-as-a-service, tutorials, and mentoring services reducing the need for technical skills and experience to conduct complex, targeted attacks – in fact, just 2-3% of threat actors today are advanced coders1.

  • The irony of ‘honor amongst cyber-thieves’ – Much like the legitimate online retail world, trust and reputation are ironically essential parts of cybercriminal commerce: 77% of cybercriminal marketplaces analyzed require a vendor bond – a license to sell – which can cost up to $3,000. 85% of these use escrow payments, and 92% have a third-party dispute resolution service. Every marketplace provides vendor feedback scores. Cybercriminals also try to stay a step ahead of law enforcement by transferring reputation between websites – as the average lifespan of a dark net Tor website is only 55 days.

  • Popular software is giving cybercriminals a foot in the door – Cybercriminals are focusing on finding gaps in software that will allow them to get a foothold and take control of systems by targeting known bugs and vulnerabilities in popular software. Examples include the Windows operating system, Microsoft Office, web content management systems, and web and mail servers. Kits that exploit vulnerabilities in niche systems command the highest prices (typically ranging from $1,000-$4,000 USD). Zero Days (vulnerabilities that are not yet publicly known) retail for tens of thousands of dollars on dark web markets.

“Unfortunately, it’s never been easier to be a cybercriminal. Complex attacks previously required serious skills, knowledge and resource. Now the technology and training is available for the price of a gallon of gas. And whether it’s having your company and customer data exposed, deliveries delayed or even a hospital appointment cancelled, the explosion in cybercrime affects us all,” comments report author Alex Holland, Senior Malware Analyst at HP Inc.

“At the heart of this is ransomware, which has created a new cybercriminal ecosystem rewarding smaller players with a slice of the profits. This is creating a cybercrime factory line, churning out attacks that can be very hard to defend against and putting the businesses we all rely on in the crosshairs,” Holland adds.

HP consulted with a panel of experts from cybersecurity and academia – including ex-black hat hacker Michael ‘Mafia Boy’ Calce and criminologist Dr. Mike McGuire – to understand how cybercrime has evolved and what businesses can do to better protect themselves against the threats of today and tomorrow. They warned that businesses should prepare for destructive data denial attacks, increasingly targeted cyber campaigns, and cybercriminals using emerging technologies like artificial intelligence to challenge organizations’ data integrity.

To protect against current and future threats, the report offers up the following advice for businesses:

Master the basics to reduce cybercriminals’ chances: Follow best practices, such as multi-factor authentication and patch management; reduce your attack surface from top attack vectors like email, web browsing and file downloads; and prioritize self-healing hardware to boost resilience.

Focus on winning the game: plan for the worst; limit risk posed by your people and partners by putting processes in place to vet supplier security and educate workforces on social engineering; and be process-oriented and rehearse responses to attacks so you can identify problems, make improvements and be better prepared.

Cybercrime is a team sport. Cybersecurity must be too: talk to your peers to share threat information and intelligence in real-time; use threat intelligence and be proactive in horizon scanning by monitoring open discussions on underground forums; and work with third-party security services to uncover weak spots and critical risks that need addressing.

“We all need to do more to fight the growing cybercrime machine,” says Dr. Ian Pratt, Global Head of Security for Personal Systems at HP Inc. “For individuals, this means becoming cyber aware. Most attacks start with a click of a mouse, so thinking before you click is always important. But giving yourself a safety net by buying technology that can mitigate and recover from the impact of bad clicks is even better.”

“For businesses, it’s important to build resiliency and shut off as many common attack routes as possible,” Pratt continues. “For example, cybercriminals study patches on release to reverse engineer the vulnerability being patched and can rapidly create exploits to use before organizations have patched. So, speeding up patch management is important. Many of the most common categories of threat such as those delivered via email and the web can be fully neutralized through techniques such as threat containment and isolation, greatly reducing an organization’s attack surface regardless of whether the vulnerabilities are patched or not.”

You can read the full report here https://threatresearch.ext.hp.com/evolution-of-cybercrime-report/

Media contacts:
Vanessa Godsal / vgodsal@hp.com

About the research

The Evolution of Cybercrime – The Evolution of Cybercrime: Why the Dark Web is Supercharging the Threat Landscape and How to Fight Back – an HP Wolf Security Report is based on findings from:

  1. An independent study carried out by dark web investigation firm Forensic Pathways and commissioned by HP Wolf Security. The firm collected dark web marketplace listings using their automated crawlers that monitor content on the Tor network. Their Dark Search Engine tool has an index consisting of >35 million URLs of scraped data. The collected data was examined and validated by Forensic Pathway’s analysts. This report analyzed approximately 33,000 active websites across the dark web, including 5,502 forums and 6,529 marketplaces. Between February and April 2022, Forensic Pathways identified 17 recently active cybercrime marketplaces across the Tor network and 16 hacking forums across the Tor network and the web containing relevant listings that comprise the data set.

  2. The report also includes threat telemetry from HP Wolf Security and research into the leaked communications of the Conti ransomware group.

  3. Interviews with and contributions from a panel of cybersecurity experts including:

    • Alex Holland, report author, Senior Malware Analyst at HP Inc.

    • Joanna Burkey, Chief Information Security Officer at HP Inc.

    • Dr. Ian Pratt, Global Head of Security for Personal Systems at HP Inc.

    • Boris Balacheff, Chief Technologist for Security Research and Innovation at HP Labs, HP Inc.

    • Patrick Schlapfer, Malware Analyst at HP Inc.

    • Michael Calce, former black hat “MafiaBoy”, HP Security Advisory Board Chairman, CEO of decentraweb, and President of Optimal Secure.

    • Dr. Mike McGuire, senior lecturer of criminology at the University of Surrey, UK, and a published expert on cybersecurity.

    • Robert Masse, HP Security Advisory Board member and Partner at Deloitte.

    • Justine Bone, HP Security Advisory Board member and CEO at Medsec.

About HP

HP Inc. is a technology company that believes one thoughtful idea has the power to change the world. Its product and service portfolio of personal systems, printers, and 3D printing solutions helps bring these ideas to life. Visit http://www.hp.com.

About HP Wolf Security

From the maker of the world’s most secure PCs2 and Printers3, HP Wolf Security is a new breed of endpoint security. HP’s portfolio of hardware-enforced security and endpoint-focused security services are designed to help organizations safeguard PCs, printers, and people from circling cyber predators. HP Wolf Security provides comprehensive endpoint protection and resiliency that starts at the hardware level and extends across software and services.

©Copyright 2022 HP Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

1 According to Michael Calce, former black hat “MafiaBoy”, HP Security Advisory Board Member, CEO of decentraweb, and President of Optimal Secure
2 Based on HP’s unique and comprehensive security capabilities at no additional cost among vendors on HP Elite PCs with Windows and 8th Gen and higher Intel® processors or AMD Ryzen™ 4000 processors and higher; HP ProDesk 600 G6 with Intel® 10th Gen and higher processors; and HP ProBook 600 with AMD Ryzen™ 4000 or Intel® 11th Gen processors and higher.
3 HP’s most advanced embedded security features are available on HP Enterprise and HP Managed devices with HP FutureSmart firmware 4.5 or above. Claim based on HP review of 2021 published features of competitive in-class printers. Only HP offers a combination of security features to automatically detect, stop, and recover from attacks with a self-healing reboot, in alignment with NIST SP 800-193 guidelines for device cyber resiliency. For a list of compatible products, visit: hp.com/go/PrintersThatProtect. For more information, visit: hp.com/go/PrinterSecurityClaims.

Thu, 21 Jul 2022 04:34:00 -0500 https://finance.yahoo.com/news/stolen-credentials-selling-dark-price-163000009.html