A perfect key to success with this 000-676 free PDF

If you really want to show your professionalism, just passing the 000-676 exam is not sufficient. You should have enough xSeries - Linux Installation/Performance Optimization knowledge to help you work in real-world scenarios. Killexams.com focuses especially on improving your knowledge of the 000-676 objectives so that you not only pass the exam, but are truly ready to work in a practical environment as a professional.

Exam Code: 000-676 Practice exam 2022 by Killexams.com team
xSeries - Linux Installation/Performance Optimization
IBM Installation/Performance approach
Killexams : Observability: Why It’s a Red Hot Tech Term

Recently, IBM struck a deal to acquire Databand.ai, which develops software for data observability. The purchase amount was not announced, but the acquisition underscores the importance of observability: IBM has acquired similar companies over the past couple of years.

“Observability goes beyond traditional monitoring and is especially relevant as infrastructure and application landscapes become more complex,” said Joseph George, Vice President of Product Management, BMC.  “Increased visibility gives stakeholders greater insight into issues and user experience, reducing time spent firefighting, and creating time for more strategic initiatives.”

Observability is an enormous category. It encompasses log analytics, application performance monitoring (APM), and cybersecurity, and the term has been applied in other IT areas like networking. Spending on APM alone, for example, is expected to hit $6.8 billion by 2024, according to Gartner.

So then, what makes observability unique? And why is it becoming a critical part of the enterprise tech stack? Well, let’s take a look.

Also read: Top Observability Tools & Platforms

How Observability Works

The ultimate goal of observability is to go well beyond traditional monitoring capabilities by giving IT teams the ability to understand the health of a system at a glance.

An observability platform has several important functions. One is to find the root causes of a problem, which could be a security breach or a bug in an application. In some cases, the system will offer a fix. Sometimes an observability platform will make the corrections on its own.

“Observability isn’t a feature you can install or a service you can subscribe to,” said Frank Reno, Senior Product Manager, Humio. “Observability is something you either have, or you don’t. It is only achieved when you have all the data to answer any question about the health of your system, whether predictable or not.”

The traditional approach is to crunch huge amounts of raw telemetry data and analyze it in a central repository. However, this could be difficult to do at the edge, where there is a need for real-time solutions.

“An emerging alternative approach to observability is a ‘small data’ approach, focused on performing real-time analysis on data streams directly at the source and collecting only the valuable information,” said Shannon Weyrick, vice president of research, NS1. “This can provide immediate business insight, tighten the feedback loop while debugging problems, and help identify security weaknesses. It provides consistent analysis regardless of the amount of raw data being analyzed, allowing it to scale with data production.”
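The "small data" pattern described above is easy to sketch: rather than shipping every raw sample to a central store, an edge agent keeps a rolling window, forwards only periodic summaries, and escalates individual points when they look anomalous. The class below is a minimal illustration of that idea, not any vendor's API; the class name, threshold, and record shapes are all hypothetical.

```python
from collections import deque

class EdgeAggregator:
    """Summarizes a raw metric stream at the source, emitting only
    rolling statistics and outliers instead of every data point.
    (Illustrative sketch; names and thresholds are hypothetical.)"""

    def __init__(self, window=100, outlier_factor=3.0):
        self.window = deque(maxlen=window)   # bounded buffer at the edge
        self.outlier_factor = outlier_factor

    def observe(self, value):
        """Ingest one raw sample; return a record only when it is worth shipping."""
        self.window.append(value)
        n = len(self.window)
        mean = sum(self.window) / n
        std = (sum((v - mean) ** 2 for v in self.window) / n) ** 0.5
        # Ship only large deviations; everything else stays at the source.
        if std > 0 and abs(value - mean) > self.outlier_factor * std:
            return {"event": "outlier", "value": value, "mean": round(mean, 2)}
        return None

    def summary(self):
        """Periodic rollup sent upstream instead of the raw stream."""
        n = len(self.window)
        return {"count": n,
                "mean": round(sum(self.window) / n, 2),
                "min": min(self.window),
                "max": max(self.window)}
```

A steady stream produces no shipped records at all, only the occasional `summary()` rollup, which is what keeps the feedback loop tight without moving raw telemetry off the device.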

Also read: Observability’s Growth to Evolve into Automation Solutions in 2022

The Levers for Observability

The biggest growth factor for observability is the strategic importance of software. It’s become a must-have for most businesses.

“Software has become the foundation for how organizations interact with their customers, manage their supply chain, and are measured against their competition,” said Patrick Lin, VP of Product Management for Observability, Splunk. “Particularly as teams modernize, there are a lot more things they have to monitor and react to — hybrid environments, more frequent software changes, more telemetry data emitted across fragmented tools, and more alerts. Troubleshooting these software systems has never been harder, and the way monitoring has traditionally been done just doesn’t cut it anymore.”

The typical enterprise has dozens of traditional tools for monitoring infrastructure, applications and digital experiences. The result is that there are data silos, which can lessen the effectiveness of those tools. In some cases, it can mean catastrophic failures or outages.

But with observability, the data is centralized. This allows for more visibility across the enterprise.

“You get to root causes quickly,” said Lin. “You understand not just when an issue occurs but what caused it and why. You improve mean time to detection (MTTD) and mean time to resolution (MTTR) by proactively detecting emerging issues before customers are impacted.”
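Both metrics are simple averages over incident timestamps, so they are easy to compute from an incident log. A minimal sketch, assuming each incident record carries `started`, `detected`, and `resolved` times (the field names are hypothetical, and note that some teams measure MTTR from the start of the incident rather than from detection):

```python
from datetime import datetime

def _mean_minutes(deltas):
    """Average a list of timedeltas, expressed in minutes."""
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 60

def mttd_mttr(incidents):
    """Mean time to detection and mean time to resolution (minutes)
    from incident dicts with 'started', 'detected', 'resolved' keys."""
    mttd = _mean_minutes([i["detected"] - i["started"] for i in incidents])
    mttr = _mean_minutes([i["resolved"] - i["detected"] for i in incidents])
    return mttd, mttr
```

For example, two incidents detected after 10 and 20 minutes and resolved 60 and 30 minutes later yield an MTTD of 15 minutes and an MTTR of 45 minutes.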

Also read: Dynatrace vs Splunk: Monitoring Tool Comparison

Observability Challenges

Of course, observability is not a silver bullet. The technology certainly has downsides and risks.  

In fact, one of the nagging issues is the hype factor. This could ultimately harm the category.  “There is a significant amount of observability washing from legacy vendors, driving confusion for end users trying to figure out what observability is and how it can benefit them,” said Nick Heudecker, Senior Director of Market Strategy & Competitive Intelligence, Cribl.

True, this is a problem with any successful technology. But customers definitely need to do their due diligence.

Observability also is not a plug-and-play technology. There is a need for change management. And yes, you must have a highly skilled team to get the most from the technology.

“The biggest downside of observability is that someone – such as an engineer or a person from DevOps or the site reliability engineering (SRE) organization — needs to do the actual observing,” said Gavin Cohen, VP of Product, Zebrium.  “For example, when there is a problem, observability tools are great at providing access and drill-down capabilities to a huge amount of useful information. But it’s up to the engineer to sift through and interpret that information and then decide where to go next in the hunt to determine the root cause. This takes skill, time, patience and experience.”

With the growth in artificial intelligence (AI) and machine learning (ML), though, this can be addressed. In other words, next-generation tools can help automate the observer role. “This requires deep intelligence about the systems under observation, such as with sophisticated modeling, granular details and comprehensive AI,” said Kunal Agarwal, founder and CEO, Unravel Data.
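At its simplest, automating the observer means comparing each new reading against a learned baseline and flagging large deviations, so an engineer starts from a short list instead of raw dashboards. The toy function below stands in for the far more sophisticated models these tools actually use; the function name, window size, and threshold are illustrative assumptions.

```python
def flag_anomalies(series, baseline_window=24, threshold=3.0):
    """Return indices where a value exceeds `threshold` times the mean
    of the trailing `baseline_window` points — a toy stand-in for the
    ML models observability platforms use to automate triage."""
    flagged = []
    for i in range(baseline_window, len(series)):
        baseline = sum(series[i - baseline_window:i]) / baseline_window
        if baseline > 0 and series[i] > threshold * baseline:
            flagged.append(i)
    return flagged
```

Fed an hourly error-count series, the function stays quiet through normal fluctuation and only surfaces the hour where the count jumps well above its recent baseline.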

Read next: AI and Observability Platforms to Alter DevOps Economics

Tue, 19 Jul 2022 02:53:00 -0500 en-US text/html https://www.itbusinessedge.com/it-management/observability-is-hot/
Killexams : Cloud Computing

IBM Security released the annual Cost of a Data Breach Report, finding costlier and higher-impact data breaches than ever before, with the global average cost of a data breach reaching an all-time high of $4.35 million for studied organizations. With breach costs increasing nearly 13% over the last two years of the report, the findings suggest these incidents may also be contributing to rising costs of goods and services.

Posted August 08, 2022

Tone Software Corporation, a global provider of management and productivity solutions for IBM Z mainframes, is acquiring the JES2Mail, JES2FTP, Mail2ZOS, and CICS2PDF host output transformation and delivery products from CASI Software, Inc. Effective June 1, 2022, the acquisition of the CASI JES2Mail suite will expand Tone's OMC z/OS Output Management offerings for mainframe shops seeking to deliver the right information to the right users, in the most cost-effective format for the business.

Posted August 08, 2022

What's the hardest part of managing a merger, acquisition, or divestiture? The answer may seem to be getting lawyers and regulators to sign off on the deal or handling the business restructuring that follows. But here's another hairy M&A and divestiture challenge which executives too often underestimate: migrating massive amounts of data, most of it unstructured, between entities.

Posted August 08, 2022

Dremio is extending its partnership with Amazon Web Services (AWS), announcing that Dremio Cloud is now available to purchase in the AWS Marketplace. Dremio Cloud, being available to purchase in AWS Marketplace, provides businesses with the freedom and flexibility to use their preferred procurement vehicle to adopt the open lakehouse platform, according to the vendor.

Posted August 04, 2022

Tresata is launching its Digital Business Platform (DBP) to help businesses deliver functional data for digital transformations built exclusively for data clouds. By leveraging the exabytes of data in the cloud, DBP can put to work enterprise data that would otherwise go unused. Automating this process eliminates the need for extensive time and resources dedicated to monitoring and managing usable data.

Posted August 04, 2022

Flow Security is announcing its $10M seed round, which facilitated the launch of a data security system that can locate and protect data both at rest and in motion. Led by Amiti and backed by industry leaders like CyberArk CEO Udi Mokady and Demisto CEO and co-founder Slavik Markovich, Flow Security's funding tackles data sprawl and security issues resulting from the industry shift toward cloud systems for data management.

Posted August 04, 2022

ManageEngine is releasing Analytics Plus, an IT analytics product that is structured to consolidate analytics deployment onto a singular platform through a newly available SaaS option. Users will be able to easily deploy analytics on either public or private clouds in under 60 seconds, according to the vendor. The program will have the capability to be deployed in on-premises servers, Docker, or cloud platforms like Google Cloud and Azure.

Posted August 03, 2022

Ask a data engineer why they got into the field, and they'll likely share how they looked forward to bringing concepts to life or solving complex challenges; or they wanted to share their expertise in a collaborative, agile environment; or they simply wanted to provide better visibility into the way products work for end users—and how it could be improved.

Posted August 03, 2022

Hazelcast, Inc., home of the real-time data platform, is introducing the beta release of a new serverless offering under its Viridian cloud portfolio, dubbed Hazelcast Viridian Serverless. The platform enables companies to take immediate action on real-time data by speeding app development, simplifying provisioning, and enabling flexible and robust integration of real-time data into applications, according to the vendor.

Posted August 02, 2022

The latest version of the BackBox Automation Platform revolutionizes the customer experience through on-premises or cloud capability options for SaaS. This release provides network automation for managed service providers (MSPs), managed security service providers (MSSPs), and enterprise users, who will now be able to decide which configuration best suits their needs in increasingly hybrid environments. Customers will also have access to new features, like the executive dashboard, which impact inventory analytics and automation performance for concise automation processes.

Posted August 02, 2022

CircleCI has announced a collaboration with GitLab Inc., The One DevOps Platform for software innovation, to provide native support between GitLab and CircleCI. As a result, customers will have access to tools available in both systems, allowing for greater choice and flexibility in software innovation processes.

Posted August 01, 2022

Acceldata has announced an alternative method for long-term data platform independence that will be available for Hortonworks Data Platform (HDP) and Cloudera Data Hub (CDH) customers. This alternative choice will streamline data platform usage by offering customers the option to stay on-premises with the current Hadoop release or migrate to a selected cloud data platform, eliminating forced migration.

Posted August 01, 2022

Mason, innovator of a fully managed infrastructure for developing and delivering dedicated smart devices, is debuting the Mason X-Ray, a fully integrated device management and observability platform for remote debugging and resolution of issues on IoT smart devices.

Posted July 29, 2022

Snyk, a provider of developer security, is introducing Snyk Cloud, a comprehensive cloud security solution designed by and for developers. Thoughtfully designed with global DevSecOps teams in mind, Snyk's cloud security solution unites and extends the existing products Snyk Infrastructure as Code and Snyk Container with Fugue's leading cloud security posture management (CSPM) capabilities, according to the vendor.

Posted July 28, 2022

CData Software, a provider of data connectivity and integration solutions, is offering an all-new AWS Glue client connector for CData Connect Cloud, expanding access to hundreds of data sources and destinations for AWS Glue customers. CData now offers cloud-native connectivity solutions that seamlessly integrate with the applications and systems previously unavailable within AWS Glue.

Posted July 28, 2022

Striim, Inc. is expanding its existing agreement with Microsoft to meet increasing customer demand for real-time analytics and operations by enabling customers to leverage Striim Cloud on Microsoft Azure. This move allows for continuous, streaming data integration from on-premises and cloud enterprise sources to Azure Synapse Analytics and Power BI, taking full advantage of the Microsoft Intelligent Data Platform.

Posted July 28, 2022

BigID, provider of a data intelligence platform, is introducing security and privacy aware access control for AWS Cloud infrastructure to reduce risk and automate role-based policies across AWS including S3, Redshift, Athena, EMR, and more with extended integrations. By using BigID, AWS customers can automate intelligent access control to enable and restrict access to their sensitive data—while creating business policies based on data sensitivity and context, according to the vendor.

Posted July 27, 2022

SAP released 27 new and updated Security Notes, including six High Priority notes, during its July patch release. Onapsis Research Labs (ORL) supported SAP in patching a Missing Authorization Check vulnerability in the highly sensitive SAP Enterprise Extension Defense Forces & Public Security application.

Posted July 27, 2022

GFOS mbH announced that its modular HR software, gfos 4.8, is certified by SAP to integrate with cloud solutions from SAP, helping organizations extract employee data from SAP SuccessFactors solutions to further process the information within the gfos software.

Posted July 27, 2022

OutSystems, a global leader in high-performance application development, announced it is now an official member of the SAP PartnerEdge program, with a Build focus, underscoring its commitment to providing high-value low-code to businesses using SAP solutions. While twice as many OutSystems customers connect to SAP technologies as any other system of record, the new relationship will make it even easier for additional businesses within the SAP ecosystem to discover and connect with OutSystems.

Posted July 27, 2022

The pandemic has expedited businesses' need to digitally transform. Surging digital demands paired with talent shortages while using legacy technologies have made it nearly impossible for businesses to keep pace with change, innovate, and stay ahead of the competition. To meet these challenges, organizations need technologies that make it easier to build applications and streamline workflows. The answer? Low code development.

Posted July 27, 2022

Whether users are delaying the move to Informer 5 due to a lack of migration and IT resources, the thought of transitioning reports, or other projects, Entrinsik wants to help facilitate the migration with some best practices and migration resources. Entrinsik provides users with a Migration Guide that has tools for a successful migration. The Data Gathering Workbook can collect and document the information needed during the Informer 4 to Informer 5 migration to keep track of its progress.

Posted July 27, 2022

Rocket Software is offering a series of livestreams featuring its product roadmap for the future. The Rocket MultiValue product roadmap outlines the vision, direction, priorities, and progress of a product over time. It represents the plan of both short and long-term goals for Rocket's products.

Posted July 27, 2022

Cloudian announced that HyperStore object storage is now validated to work with Microsoft Azure Stack HCI, giving Azure Stack HCI customers the scalability and flexibility benefits of public cloud in a secure and cost-effective, cloud-native storage platform within their own data centers.

Posted July 26, 2022

MANTA, the data lineage platform, is launching Release 37, offering new enhancements to its data lineage platform, including modeling tool and scanner upgrades, to deliver more efficient data lineage experiences for customers. With Release 37, MANTA further upgrades its platform to meet the modeling requirements of users scanning new properties into MANTA Flow to infer end-to-end data lineage.

Posted July 26, 2022

Model9, a provider of cloud data management for the IBM Z mainframe, announced it will participate in The Open Mainframe Project, an open source initiative with 20 programs and working groups that enable collaboration across the mainframe community to develop shared tool sets and resources.

Posted July 25, 2022

IBM is expanding its Power10 server line with the introduction of mid-range and scale-out systems to modernize, protect, and automate business applications and IT operations. The new Power10 servers combine performance, scalability, and flexibility with new pay-as-you-go consumption offerings for clients looking to deploy new services quickly across multiple environments.

Posted July 25, 2022

Pecan AI, a provider of AI-based predictive analytics for BI analysts and business teams, is adding one-click model deployment and integration with common CRMs, marketing automation, and other core business systems. Pecan's customers can now take immediate actions based on the highly accurate predictions for future churn, lifetime value, demand and other customer-conversion metrics generated by Pecan, according to the vendor.

Posted July 21, 2022

Dataiku is releasing Dataiku 11, a pivotal update of the company's data science and AI platform that helps organizations deliver on the promise of Everyday AI. This packed release provides new capabilities for expert teams to deliver more value at scale, enables tech-savvy workers to take on more expansive challenges, helps non-technical workers more easily engage with AI, and provides strengthened AI Governance to ensure projects are robust, transparent, and ready for success at scale.

Posted July 20, 2022

TIE Kinetix, a provider of supply chain digitalization and a current member of Oracle PartnerNetwork (OPN), announced that TIE Kinetix FLOW Connector for Oracle Fusion Cloud Supply Chain and Manufacturing is available on Oracle Cloud Marketplace. TIE Kinetix FLOW Connector for Oracle Fusion Cloud Supply Chain and Manufacturing now extends TIE Kinetix's cloud-native solution, EDI-2-FLOW, where Oracle Cloud SCM users have the opportunity to benefit from a fully integrated EDI solution via a standard connector.

Posted July 20, 2022

Oracle NetSuite is making updates to NetSuite Analytics Warehouse, helping organizations further enhance decision making and uncover new revenue streams. The latest updates boost the prebuilt data warehouse and analytics solution for NetSuite customers by making it easier for customers to blend relevant data sets and introducing new pre-built visualization capabilities.

Posted July 20, 2022

Oracle is introducing the Oracle Construction Intelligence Cloud Analytics platform, combining data from Oracle Smart Construction Platform applications to provide owners and contractors a comprehensive understanding of performance throughout their operations. With this insight, organizations can quickly spot and correct issues and target ways to drive continuous improvement across project planning, construction, and asset operation, according to the vendor.

Posted July 20, 2022

Solace, an enabler of event-driven architecture for real-time enterprises, is joining the Amazon Web Services (AWS) Independent Software Vendor (ISV) Accelerate Program, a co-sell program for AWS Partners which provide software solutions that run on or integrate with AWS.

Posted July 18, 2022

Kyligence's Intelligent Cloud OLAP Platform now offers support for Amazon EMR Serverless, a serverless option from Amazon Web Services (AWS) that makes it easier for data analysts and engineers to run open source big data analytics frameworks. Using automatic on-demand provisioning and scaling capabilities available through Amazon EMR Serverless, Kyligence Cloud cost effectively meets the changing processing requirements across all data volumes, according to the vendor.

Posted July 15, 2022

Grafana Labs is introducing the Kubernetes Monitoring solution for Grafana Cloud, enabling all levels of Kubernetes usage within an organization. Kubernetes Monitoring is available to all Grafana Cloud users, including those on the generous free tier. Grafana Cloud users can install the Grafana Agent onto their Kubernetes cluster(s) and in minutes, the Kube-state metrics will be shipped to Prometheus and Grafana, according to the vendor.

Posted July 15, 2022

Ensono, an expert technology adviser and managed service provider, is acquiring AndPlus, a cloud native and data engineering firm. This acquisition continues Ensono's strategic investment in scaling its cloud and data engineering capabilities and reinforces the company's commitment to helping clients plan, build, migrate, and operate in the cloud, according to the vendor.

Posted July 14, 2022

AI is delivering new benefits and efficiencies to organizations through greater automation capabilities, ease of use, and accessibility—across a variety of use cases. Spurred by the COVID-19 pandemic and a host of other compounding factors such as climate change and sustainability, supply chain delays, the Great Resignation, and the war in Ukraine, companies are scrambling to take advantage of what AI has to offer during this upheaval.

Posted July 13, 2022

Mon, 15 Mar 2021 13:08:00 -0500 en text/html https://www.dbta.com/Categories/Cloud-Computing-328.aspx
Killexams : Master Data Management

Kyligence, an open source Online Analytical Processing (OLAP) platform for big data, is announcing its completion of System and Organization Controls (SOC) 2 Type II certification, meaning it now adheres to the criteria of the American Institute of Certified Public Accountants (AICPA). This follows its completion of SOC 2 Type I in 2021, assuring Kyligence's continued compliance through third-party auditing firm Ernst & Young.

Posted August 09, 2022

NVIDIA is releasing a unified computing platform for speeding breakthroughs in quantum research and development across AI, HPC, health, finance, and other disciplines. The NVIDIA Quantum Optimized Device Architecture, or QODA, aims to make quantum computing more accessible by creating a coherent hybrid quantum-classical programming model.

Posted July 18, 2022

Solace, an enabler of event-driven architecture for real-time enterprises, is joining the Amazon Web Services (AWS) Independent Software Vendor (ISV) Accelerate Program, a co-sell program for AWS Partners which provide software solutions that run on or integrate with AWS.

Posted July 18, 2022

Kyligence's Intelligent Cloud OLAP Platform now offers support for Amazon EMR Serverless, a serverless option from Amazon Web Services (AWS) that makes it easier for data analysts and engineers to run open source big data analytics frameworks. Using automatic on-demand provisioning and scaling capabilities available through Amazon EMR Serverless, Kyligence Cloud cost effectively meets the changing processing requirements across all data volumes, according to the vendor.

Posted July 15, 2022

Grafana Labs is introducing the Kubernetes Monitoring solution for Grafana Cloud, enabling all levels of Kubernetes usage within an organization. Kubernetes Monitoring is available to all Grafana Cloud users, including those on the generous free tier. Grafana Cloud users can install the Grafana Agent onto their Kubernetes cluster(s) and, in minutes, kube-state metrics will be shipped to Prometheus and Grafana, according to the vendor.

Posted July 15, 2022

Deci, the deep learning company harnessing AI to solve the AI efficiency gap, announced it has raised $25 million in a Series B funding round, enabling the company to expand its go-to-market activities, as well as further accelerate the company's R&D efforts. The funding round was led by global software investor Insight Partners, with participation from existing investors Square Peg, Emerge, Jibe Ventures, and Fort Ross Ventures, as well as new investor ICON. The investment comes just seven months after Deci secured $21 million in Series A funding, also led by Insight Partners, bringing Deci's total funding to $55.1 million.

Posted July 14, 2022

Ensono, an expert technology adviser and managed service provider, is acquiring AndPlus, a cloud native and data engineering firm. This acquisition continues Ensono's strategic investment in scaling its cloud and data engineering capabilities and reinforces the company's commitment to helping clients plan, build, migrate, and operate in the cloud, according to the vendor.

Posted July 14, 2022

AI is delivering new benefits and efficiencies to organizations through greater automation capabilities, ease of use, and accessibility—across a variety of use cases. Spurred by the COVID-19 pandemic and a host of other compounding factors such as climate change and sustainability, supply chain delays, the Great Resignation, and the war in Ukraine, companies are scrambling to take advantage of what AI has to offer during this upheaval.

Posted July 13, 2022

Following the integration and acquisition of several backup and recovery companies and solutions, Jungle Disk is rebranding as CyberFortress—a global company providing managed data backups built to prevent business disruption through rapid recovery. The acquisitions include KeepItSafe, LiveVault, and OffsiteDataSync from J2 Global.

Posted July 13, 2022

MANTA, the data lineage platform, is partnering with IBM to drive data-driven success for enterprise-level customers by providing MANTA's data lineage platform with IBM Cloud Pak for Data to offer businesses historical, indirect, and technical data lineage capabilities. MANTA's automated data lineage platform is designed to provide a line of sight into data environments by building a powerful map of all data flows, sources, transformations, and dependencies to help improve data governance, streamline migration projects, and accelerate incident resolution.

Posted July 13, 2022

DataBank, a leading provider of enterprise-class colocation, interconnection, and managed services, is partnering with Corsa Security, a provider in automating firewall virtualization, to deploy, scale, and optimize its Palo Alto Networks ML-powered VM-Series Virtual Next-Generation Firewalls with speed, simplicity, and savings.

Posted July 13, 2022

Signal AI, a global External Intelligence company, is launching its External Intelligence Graph, a comprehensive view of an organization's external world built on real-time data and content. Signal AI's External Intelligence Graph maps the relationships between the things a modern organization needs to care about, like climate change, supply chain risk, or competitor intelligence, and highlights how an organization is "associated" to these important topics.

Posted July 12, 2022

Tue, 08 Feb 2022 06:23:00 -0600, https://www.dbta.com/Categories/Master-Data-Management-336.aspx
You Got Something On Your Processor Bus: The Joys Of Hacking ISA And PCI

Although the ability to expand a home computer with more RAM, storage and other features has been around for as long as home computers exist, it wasn’t until the IBM PC that the concept of a fully open and modular computer system became mainstream. Instead of being limited to a system configuration provided by the manufacturer and a few add-ons that really didn’t integrate well, the concept of expansion cards opened up whole industries as well as a big hobbyist market.

The first IBM PC had five 8-bit expansion slots that were connected directly to the 8088 CPU. With the IBM PC/AT these expansion slots became 16-bit courtesy of the 80286 CPU it was built around. These slots could be used for anything from graphics cards to networking, expanded memory or custom I/O. Though there was no distinct original name for this card edge interface, around the PC/AT era it got referred to as PC bus, as well as AT bus. The name Industry Standard Architecture (ISA) bus is a retronym created by PC clone makers.

With such openness came the ability to make your own cards relatively easily and cheaply for the ISA bus, and the subsequent and equally open PCI bus. To this day this openness allows for a vibrant ecosystem, whether one wishes to build a custom ISA or PCI soundcard, or add USB support to a 1981 IBM PC system.

But what does it take to get started with ISA or PCI expansion cards today?

The Cost of Simplicity

From top to bottom: 8-bit XT bus, 16-bit AT/ISA, 32-bit EISA.

An important thing to note about ISA and the original PC/AT bus is that it isn’t so much a generic bus as a description of devices hanging off an 8088 or 80286 addressing and data bus. This means, for example, that originally the bus ran at the clock speed of the CPU in question: 4.77 MHz for the original PC bus and 6-8 MHz for the PC/AT. Although 8-bit cards could be used in 16-bit slots most of the time, there was no guarantee that they would work properly.

As PC clone vendors began to introduce faster CPUs in their models, the AT bus ended up being clocked at anywhere from 10 to 16 MHz. Understandably, this led to many existing AT (ISA) bus cards not working properly in those systems. Eventually, the clock for the bus was decoupled from the processor clock by most manufacturers, but despite what the acronym ‘ISA’ suggests, at no point in time was ISA truly standardized.

There was, however, an attempt to standardize a replacement for ISA in the form of Extended ISA (EISA). Created in 1988, this featured a 32-bit bus, running at 8.33 MHz. Although it didn’t take off in consumer PCs, EISA saw some uptake in the server market, especially as a cheaper alternative to IBM’s proprietary Micro Channel architecture (MCA) bus. MCA itself was envisioned by IBM as the replacement of ISA.

Ultimately, ISA survives to this day in mostly industrial equipment and embedded applications (e.g. the LPC bus), while the rest of the industry moved on to PCI and to PCIe much later. Graphics cards saw a few detours in the form of VESA Local Bus (VLB) and Accelerated Graphics Port (AGP), which were specialized interfaces aimed at the needs of GPUs.

Getting started with new old tech

The corollary of this tumultuous history of ISA in particular is that one has to be careful when designing a new ‘ISA expansion card’. For truly wide compatibility, one could design an 8-bit card that can work with bus speeds anywhere from 4.77 to 20 MHz. Going straight to a 16-bit card would be an option if one has no need to support 8088-based PCs. When designing a PC/104 card, there should be no compatibility issues, as it follows pretty much the most standard form of the ISA bus.

The physical interface is not a problem with either ISA or PCI, as both use edge connectors. These were picked mostly because they were cheap yet reliable, which hasn’t changed today. On the PCB end, no physical connector exists, merely the conductive ‘fingers’ that contact the contacts of the edge connector. One can use a template for this part, to get good alignment with the contacts. Also keep in mind the thickness of the PCB as the card has to make good contact. Here the common 1.6 mm seems to be a good match.

One can easily find resources for ISA and PCI design rules online if one wishes to create the edge connector themselves, such as this excellent overview on the Multi-CB (PCB manufacturer, no affiliation) site. This shows the finger spacing, and the 45 degrees taper on the edge, along with finger thickness and distance requirements.

On the electrical side, it is useful to know that ISA uses 5 V level signaling, whereas PCI can use 5 V, 3.3 V or both. For the latter, this difference is indicated using the placement of the notch in the PCI slot, as measured from the IO plate: at 56.21 mm for 3.3 V cards and 104.47 mm for 5 V. PCI cards themselves will have either one of these notches, or both if they support both voltages (Universal card).

PCI slots exist in 32-bit and 64-bit versions, of which only the former made a splash in the consumer market. On the flip-side of PCI we find PCI-X: an evolution of PCI, which saw most use in servers in its 64-bit version. PCI-X essentially doubles the maximum frequency of PCI (66 to 133 MHz), while removing 5V signaling support. PCI-X cards will often work in 3.3V PCI slots for this reason, as well as vice-versa. A 64-bit card can fall back to 32-bit mode if it is inserted into a shorter, 32-bit slot, whether PCI or PCI-X.

Driving buses

Every device on a bus adds a load which a signaling device has to overcome. In addition, on a bus with shared lines, it’s important that individual devices can disengage themselves from these shared lines when they are not using them. The standard way to deal with this is to use a tri-state buffer, such as the common 74LS244. Not only does it provide the isolation provided by a standard digital buffer circuit, it can also switch to a Hi-Z (high-impedance) state, in which it is effectively disconnected.
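The discipline a tri-state buffer enforces can be sketched in software. Below is a toy C model (entirely illustrative; the names and the -1 encoding for Hi-Z are inventions for this example, not from any vendor library) of a 74LS244-style channel: a disabled buffer presents high impedance, and a shared line may only be driven by one enabled buffer at a time.

```c
#include <assert.h>

#define HI_Z (-1)  /* model high impedance as -1; real lines carry 0/1 */

/* One buffer channel: with the active-low output enable asserted (0),
   the input level is driven onto the bus; otherwise the buffer
   disconnects and presents high impedance. */
int buffer_drive(int input, int output_enable_n)
{
    return output_enable_n ? HI_Z : (input & 1);
}

/* Resolve a shared bus line from all attached drivers. At most one
   driver may be active at a time, otherwise there is bus contention. */
int bus_resolve(const int drivers[], int n)
{
    int level = HI_Z;
    for (int i = 0; i < n; i++) {
        if (drivers[i] == HI_Z)
            continue;
        assert(level == HI_Z);  /* two simultaneous drivers: contention */
        level = drivers[i];
    }
    return level;
}
```

In this model a card that is not currently addressed simply keeps its buffers disabled, which is exactly what the Hi-Z state of a real ’244 or ’245 achieves on a shared bus.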

In the case of our ISA card, we need to have something like the 74LS244 or its bi-directional sibling 74LS245 to properly interface with the bus. Each bus signal connection needs to have an appropriate buffer or latch placed on it, which for the ISA bus is covered in detail in this article by Abhishek Dutta. A good example of a modern-day ISA card is the ‘Snark Barker’ SoundBlaster clone.

PCI could conceivably be done in such a discrete manner as well, but most commonly commercial PCI cards used I/O accelerator ASICs, which provide a simple, ISA-like interface to the card’s circuitry. These ICs are however far from cheap today (barring taking a risk with something like the WCH CH365), so a good alternative is to implement the PCI controller in an FPGA. The MCA version of the aforementioned ‘Snark Barker’ (as previously covered by us) uses a CPLD to interface with the MCA bus. Sites like OpenCores feature existing PCI target projects one could use as a starting point.

Chatting with ISA and PCI

After creating a shiny PCB with gold edge contact fingers and soldering some bus buffer ICs or an FPGA onto it, one still has to be able to actually speak the ISA or PCI protocol. Fortunately, a lot of resources exist for the ISA protocol, such as this one. The PCI protocol is, like the PCIe protocol, a ‘trade secret’, and only officially available via the PCI-SIG website for a price. This hasn’t kept copies of the specification from leaking over the past decades, however.

It’s definitely possible to use existing ISA and PCI projects as a template or reference for one’s own projects. The aforementioned CPLD/FPGA projects are a way to avoid implementing the protocol oneself and just get to the good bits. Either way, one has to use the interrupt (IRQ) system for the respective bus (dedicated signal lines, as well as message-based in later PCI versions), with the option to use DMA (DRQn & DACKn on ISA). Covering the intricacies of the ISA and PCI bus would however take a whole article by itself. For those of us who have had ISA cards with toggle switches or (worse) ISA PnP (Plug’n’Pray) inflicted on them, a lot of this should already be familiar.

As with any shared bus, the essential protocol when reading or writing involves requesting bus access from the bus master, or triggering the bus arbitration protocol with multiple bus masters in PCI. An expansion card can also be addressed directly using its bus address, as Abhishek Dutta covered in his ISA article, which on Linux involves using kernel routines (sys/io.h) to obtain access permissions before one can send data to a specific IO port on which the card can be addressed. Essentially:

#include <sys/io.h>  /* ioperm(), outb(), inb(): x86 Linux only, needs root */

if (ioperm(OUTPUT_PORT, LENGTH+1, 1)) {
        /* no access to the card's output port range, e.g. not running as root */
}
if (ioperm(INPUT_PORT, LENGTH+1, 1)) {
        /* no access to the card's input port range */
}

outb(data, port);   /* write one byte to the card */
data = inb(port);   /* read one byte back */

With ISA, the IO address is set on the card, and an address decoder on the address signal lines is used to determine a match. Often toggle switches or jumpers were used to select a specific address, IRQ and DMA line. ISA PnP sought to improve on this process, but effectively caused more trouble. For PCI, PnP is part of the standard: the PCI bus is scanned for devices on boot, and the onboard ROM (BIOS) is queried for the card’s needs, after which the address and other parameters are set up automatically.
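To make the two addressing schemes concrete, the sketch below shows both sides: a simple window match such as an ISA card's address decoder performs, and the dword that standard x86 PCI configuration mechanism #1 writes to CONFIG_ADDRESS (port 0xCF8) before transferring data through CONFIG_DATA (port 0xCFC). The ISA base address here is a made-up example; the bit layout of the configuration dword follows the standard mechanism.

```c
#include <stdint.h>

/* ISA: the card's address decoder simply compares the address lines
   against the base set by jumpers or DIP switches. Here a card decodes
   a small window of I/O ports at a base address (example values). */
int isa_decodes(uint16_t io_addr, uint16_t base, uint16_t window)
{
    return io_addr >= base && io_addr < (uint16_t)(base + window);
}

/* PCI configuration mechanism #1: the host writes this dword to port
   0xCF8 (CONFIG_ADDRESS), then reads/writes the selected register
   through port 0xCFC (CONFIG_DATA). Bit 31 enables the access. */
uint32_t pci_config_addr(uint32_t bus, uint32_t dev, uint32_t fn, uint32_t reg)
{
    return (1u << 31)       /* enable bit */
         | (bus  << 16)     /* 8-bit bus number */
         | (dev  << 11)     /* 5-bit device number */
         | (fn   <<  8)     /* 3-bit function number */
         | (reg  & 0xFCu);  /* register offset, dword-aligned */
}
```

For example, reaching register 0x10 (BAR0) of device 3 on bus 0 means writing 0x80001810 to port 0xCF8 and then accessing port 0xCFC.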

Wrapping up

Obviously, this article has barely even covered the essentials when it comes to developing one’s own custom ISA or PCI expansion cards, but hopefully it has at least given a broad overview of the topic. A lot of what one needs depends on the type of card one wishes to develop, whether it’s a basic 8-bit ISA (PC/XT) card, or a 64-bit PCI-X one.

A lot of the fun with buses such as ISA and PCI, however, is that they are very approachable. Their bus speeds are well within the reach of hobbyist hardware and oscilloscopes in case of debugging/analysis. The use of a slower parallel data bus means that no differential signaling is used which simplifies the routing of traces.

Even though these legacy buses are not playing in the same league as PCIe, their feature set and accessibility means that it can provide old systems a new lease on life, even if it is for something as simple as adding Flash-based storage to an original IBM PC.

[Heading image: Snark Barker ISA SoundBlaster clone board. Credit: Tube Time]

Sat, 09 Jul 2022 12:00:00 -0500, Maya Posch, https://hackaday.com/2021/01/06/you-got-something-on-your-processor-bus-the-joys-of-hacking-isa-and-pci/
SD Times news digest: Android ML inference stack, IBM to acquire BoxBoat Technologies, Aqua Security acquires tfsec

Android announced its updateable, fully-integrated ML inference stack for developers to get built-in on-device inference essentials, optimal performance on all devices and a consistent API that spans Android versions. 

TensorFlow Lite will be available on all devices with Google Play Services and will no longer require developers to include the runtime in their apps. 

Also, automatic acceleration is a new feature in TensorFlow Lite for Android that enables per-model testing to create allowlists for specific devices, taking performance, accuracy and stability into account.

IBM to acquire BoxBoat Technologies

IBM announced plans to acquire BoxBoat Technologies, a DevOps consultancy and enterprise Kubernetes certified service provider. 

“Our clients require a cloud architecture that allows them to operate across a traditional IT environment, private cloud and public clouds. That’s at the heart of our hybrid cloud approach,” said John Granger, senior vice president of Hybrid Cloud Services at IBM. “No cloud modernization project can succeed without a containerization strategy, and BoxBoat is at the forefront of container services innovation.”

BoxBoat will join IBM Global Business Services’ Hybrid Cloud Services business to enhance IBM’s capacity to meet rising client demand for container strategy. 

Additional details are available here

Aqua Security acquires tfsec 

Aqua Security announced that it is acquiring the cloud security company tfsec to add infrastructure as code (IaC) security capabilities to its open-source portfolio and cloud-native security platform. 

The unique approach tfsec takes to loading code ensures that one’s IaC is interpreted exactly as Terraform does, meaning that regardless of complexity, users get a comprehensive view of any vulnerabilities before deployment, according to Aqua Security

“Aqua Trivy has become the industry standard for open source vulnerability scanning thanks to its simple user experience and rich functionality. Now Trivy brings the same superior experience into Infrastructure as Code scanning to provide even more value to container and code scanning,” says Itay Shakury, the director of open source at Aqua Security. “By integrating tfsec and Trivy, our users can scan code repositories and container images for vulnerabilities and IaC configuration issues – all using a single tool, that can integrate into their CI tool or even be used as a Github action.”

Devart adds new data connectivity tool

Devart added a new tool to their data connectivity product line, ODBC Driver for HubSpot, which has enterprise-level features for accessing HubSpot from ODBC-compliant reporting, analytics, BI and ETL tools.

The tool provides full support for standard ODBC API functions and data types and for all HubSpot objects and data types. It can also be connected to HubSpot directly through HTTPS or through a proxy server.

“Our ODBC driver is a standalone installation file that doesn’t require the user to deploy and configure any additional software such as a database client or a vendor library. Deployment costs are reduced drastically, especially when using the silent install method with an OEM license in large organizations that have hundreds of machines,” the company stated on its website.

Apache weekly update

This week at the Apache Software Foundation (ASF) saw the release of ShardingSphere ElasticJob 3.0.0, an ecosystem that consists of a set of distributed database solutions, including 3 independent products, JDBC, Proxy & Sidecar (Planning).

Also new this week are AntUnit 1.4.1, CloudStack 4.15.1.0 LTS, Tika 1.27, UIMA Java SDK 2.11.0, Qpid Proton 0.35.0, Dispatch 1.16.1 and more. Apache Sqoop is now retired. 

Additional details on all of the new releases from the ASF are available here.

Mon, 11 Jul 2022 12:01:00 -0500, https://sdtimes.com/android/sd-times-news-digest-android-ml-inference-stack-ibm-to-acquire-boxboat-technologies-aqua-security-acquires-tfsec/
Tech Earnings Season: 5 Things That Have Stood Out So Far

While earnings season is far from over, enough tech companies have reported to provide some feel for how sales are trending in many parts of the sector.

Here are a few of the things that have stood out as tech companies large and small have reported over the last few weeks:

1. Chip Demand Is Falling in Some Markets, While Holding Up Well in Others

Companies such as Micron Technology (MU) and Taiwan Semiconductor (TSM) have made it pretty clear -- just in case all the other evidence wasn't enough -- that consumer demand for PCs, smartphones and other tech/electronics products has been softening, both due to macro pressures and shifts in consumer spending from goods to services (all of which has particularly weighed on demand for low-end products). More recently, Seagate's (STX) weak results/guidance and Corsair Gaming's (CRSR) warning have signaled a weakening in demand for chips and components going into consumer tech hardware.

And in some non-consumer markets, OEMs have begun paring chip/component inventories -- often after building them up over the last two years amid shortages -- even though end-demand is still fairly healthy. Seagate indicated on Thursday many clients are poised to cut their hard-drive inventories (Chinese customers especially). And on Friday, Morgan Stanley's Joseph Moore reported (while downgrading Micron to an "Underweight" rating) Micron customers "are taking a more aggressive approach to inventory management" after Micron said on its June 30 earnings call that its own inventories will grow in the near-term.

On the other hand, both Micron and Taiwan Semi indicated they're still seeing good end-demand from data center and automotive end-markets. And whereas Micron and Seagate issued soft quarterly sales guidance, Taiwan Semi issued above-consensus quarterly guidance and hiked its full-year outlook.

In a chip demand environment like this, I think there's value in staying selective about which chip suppliers one invests in. On the whole, companies whose sales skew towards auto, industrial and/or cloud data center end-markets -- and which aren't selling commodity products prone to seeing big price drops when demand starts falling short of supply -- look relatively well-positioned.

2. Chip Equipment Demand Still Doesn't Look Bad Overall

Chip equipment stocks plunged following Micron's June 30 earnings report, after the memory giant said (amid weakening PC/smartphone memory demand) that it's cutting its capex plans for fiscal 2023 (ends in Aug. 2023). But since then, news flow for the group has been much healthier.

During its Q2 earnings call, Taiwan Semi said it now expects its full-year capex to be near the low end of a guidance range of $40 billion to $44 billion (still well above 2021 capex of about $30 billion), but added this is due to equipment supply constraints and indicated it will also invest heavily in capex next year. Likewise, lithography equipment giant ASML (ASML) cut its full-year sales guidance due to revenue recognition delays caused by supply constraints, but also reported strong backlog growth and indicated its capacity is largely booked through 2023. And a couple of smaller chip equipment makers, Camtek (CAMT) and Axcelis Technologies (ACLS), respectively said they now expect their Q2 sales to be at the high end and above their prior guidance ranges.

Admittedly, BE Semiconductor (BESIY) , a provider of chip assembly equipment, did issue soft Q3 guidance. And it wouldn't be surprising to see other memory makers, such as Samsung and SK Hynix, also signal that they plan to cut their memory capex.

Nonetheless, demand for wafer fabrication equipment (WFE) among non-memory chip manufacturers still looks pretty solid, thanks to factors such as greater capital-intensity for leading-edge manufacturing processes, catch-up spend for mature processes and efforts (aided by subsidies) to localize more chip production. And with many chip equipment makers now sporting high-single-digit or low-double-digit forward P/Es, their shares now arguably have a low bar to clear.

3. Software Spend Is Softening a Bit

IBM's (IBM) software division missed its Q2 revenue consensus, and (after accounting for an increase in the forex hit the company expects this year) Big Blue lowered its full-year, dollar-based, revenue guidance. Meanwhile, SAP (SAP) effectively did the same by keeping its full-year, euro-based, revenue guidance unchanged, and said on its call that its sales of traditional software licenses are getting stung as macro uncertainty accelerates the long-term shift towards cloud software spend.

One could point out here that IBM has been a long-time share donor in software (among other places), and that SAP's commentary doesn't sound that bad for cloud software/SaaS pure-plays. But cloud customer survey software provider Qualtrics (XM) also lowered its full-year guide, while mentioning on its call that it's seeing some lengthening deal cycles, and Bill McDermott, CEO of cloud IT service management software giant ServiceNow (NOW), also suggested macro fears are affecting deal activity. And Qualtrics and ServiceNow's commentary is increasingly backed up by sell-side research and other data pointing to reduced software deal activity.

Software is still taking IT spending share, and the reliance of SaaS businesses on recurring revenue streams does protect them some during a downturn (not to mention appeal to potential acquirers). But with deal activity apparently slowing -- perhaps more so outside of high-priority areas such as security -- more guidance/estimate cuts for the sector are likely on the way. And while some software firms are now arguably pricing in some bad news, some still carry elevated valuations.

4. Online Ad Spend Is Getting Hit Hard - Particularly for More Discretionary Types of Ad Buys

Snap's (SNAP) Q2 shareholder letter -- in which the company declined to provide Q3 guidance and said its Q3 revenue is flat year-over-year to date -- more than confirmed fears that digital ad budgets are getting cut as various businesses tighten their belts. Twitter's (TWTR) Q2 report, in which the company posted a $140 million revenue miss and (citing its pending/disputed deal to be acquired by Elon Musk) declined to provide Q3 guidance, also didn't do much to calm investor nerves.

It's worth noting that both Snap and Twitter have strong exposure to brand ads and app-install ads. The former has long been an early casualty when businesses get nervous about macro conditions, and the latter is apparently getting stung by a mixture of macro pressures, Apple (AAPL) user-tracking policy changes and much tougher financial conditions for many public and private tech companies.

Demand trends might not be quite as bad for some larger online ad players. Last week, online ad agency Tinuiti shared reasonably good Q2 data for its clients' Google (GOOGL) search ad spend, albeit while reporting a meaningful drop in the annual growth rate for their YouTube ad spend. Nonetheless, at a time when many firms are eager to cut costs and a tight job market often makes them reluctant to conduct major layoffs, it's easy to see many of them paring their ad/marketing spend, at least for a little while.

5. A Strong Dollar Is a Big Headwind for U.S. Multinationals

This shouldn't be a shock to anyone who has been tracking the dollar's performance against currencies such as the euro and the yen. But all the same, some of the forex hits being disclosed this earnings season are pretty eye-popping.

Forex was a 7-percentage-point headwind to IBM's Q2 sales growth, and a 4-point headwind to Netflix's (NFLX) Q2 growth. In addition, the companies respectively forecast 8-point and 7-point forex headwinds for Q3.

Look for a number of other U.S. tech companies with significant international sales to report seeing similar top-line pressures on account of a strong dollar.

(AAPL and GOOGL are holdings in the Action Alerts PLUS member club.)


Mon, 25 Jul 2022 01:20:00 -0500, Eric Jhonsa, https://realmoney.thestreet.com/investing/technology/tech-earnings-season-5-things-that-have-stood-out-so-far-16060370
Intel’s ATX12VO Standard: A Study In Increasing Computer Power Supply Efficiency

The venerable ATX standard was developed in 1995 by Intel, as an attempt to standardize what had until then been a PC ecosystem formed around the IBM AT PC’s legacy. The preceding AT form factor was not so much a standard as it was the copying of the IBM AT’s approximate mainboard and with it all of its flaws.

With the ATX standard also came the ATX power supply (PSU), the standard for which defines the standard voltage rails and the function of each additional feature, such as soft power on (PS_ON).  As with all electrical appliances and gadgets during the 1990s and beyond, the ATX PSUs became the subject of power efficiency regulations, which would also lead to the 80+ certification program in 2004.

Starting in 2019, Intel has been promoting the ATX12VO (12 V only) standard for new systems, but what is this new standard about, and will switching everything to 12 V really be worth any power savings?

What ATX12VO Is

As the name implies, the ATX12VO standard is essentially about removing the other voltage rails that currently exist in the ATX PSU standard. The idea is that by providing one single base voltage, any other voltages can be generated as needed using step-down (buck) converters. Since the Pentium 4 era this has already become standard practice for the processor and much of the circuitry on the mainboard anyway.
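The arithmetic behind that idea is simple: an ideal buck converter produces Vout = D x Vin, where D is the switching duty cycle, and stacking conversion stages multiplies their efficiencies. A quick sketch (idealized, ignoring the switching and conduction losses of real VRMs):

```c
/* Ideal (lossless) buck converter: Vout = D * Vin, so D = Vout / Vin. */
double buck_duty(double v_in, double v_out)
{
    return v_out / v_in;
}

/* Two-stage conversion, e.g. mains -> 12 V (PSU) then 12 V -> rail
   (on-board VRM): the overall efficiency is the product of the stages. */
double two_stage_efficiency(double psu_eff, double vrm_eff)
{
    return psu_eff * vrm_eff;
}
```

Deriving 5 V from 12 V thus needs a duty cycle of roughly 0.42, and 3.3 V roughly 0.28; whether ATX12VO actually saves power comes down to whether the on-board buck stages beat a multi-rail PSU's own 5 V and 3.3 V rails at the loads that matter.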

As the ATX PSU standard moved from the old 1.x revisions into the current 2.x revision range, the -5V rail was removed, and the -12V rail made optional. The ATX power connector to the mainboard was increased from 20 to 24 pins to allow for more 12 V capacity to be added. Along with the Pentium 4’s appetite for power came the new 4-pin mainboard connector, which is commonly called the “P4 connector”, but officially the “+12 V Power 4 Pin Connector” in the v2.53 standard. This adds another two 12 V lines.

Power input and output on the ASRock Z490 Phantom Gaming 4SR, an ATX12VO mainboard. (Credit: Anandtech)

In the ATX12VO standard, the -12 V, 5 V, 5 VSB (standby) and 3.3 V rails are deleted. The 24-pin connector is replaced with a 10-pin one that carries three 12 V lines (one more than ATX v2.x) in addition to the new 12 VSB standby voltage rail. The 4-pin 12 V connectors would still remain, and still require one to squeeze one or two of those through impossibly small gaps in the system’s case to get them to the top of the mainboard, near the CPU’s voltage regulator modules (VRMs).

While the PSU itself would be somewhat streamlined, the mainboard would gain these VRM sections for the 5 V and 3.3 V rails, as well as power outputs for SATA, Molex and similar. Essentially the mainboard would take over some of the PSU’s functions.

Why ATX12VO exists

A range of Dell computers and server which will be subject to California’s strict efficiency regulations.

The folks over at GamersNexus have covered their research and the industry’s thoughts on the topic of ATX12VO in an article and video that were published last year. To make a long story short, OEM system builders and systems integrators (SIs) are subject to pretty strict power efficiency regulations, especially in California. Starting in July of 2021, new Tier 2 regulations come into force that add stricter requirements for OEM and SI computer equipment: see 1605.3(v)(5) (specifically table V-7) for details.

In order to meet these ever more stringent efficiency requirements, OEMs have been creating their own proprietary 12 V-only solutions, as detailed in GamersNexus’ recent video review on the Dell G5 5000 pre-built desktop system. Intel’s ATX12VO standard therefore would seem to be more targeted at unifying these proprietary standards rather than replacing ATX v2.x PSUs in DIY systems. For the latter group, who build their own systems out of standard ATX, mini-ITX and similar components, these stringent efficiency regulations do not apply.

The primary question thus becomes whether ATX12VO makes sense for DIY system builders. While the ability to (theoretically) increase power efficiency especially at low loads seems beneficial, it’s not impossible to accomplish the same with ATX v2.x PSUs. As stated by an anonymous PSU manufacturer in the GamersNexus article, SIs are likely to end up simply using high-efficiency ATX v2.x PSUs to meet California’s Tier 2 regulations.

Evolution vs Revolution

Seasonic’s CONNECT DC-DC module connected to a 12V PSU. (Credit: Seasonic)

Ever since the original ATX PSU standard, the improvements have been gradual and never disruptive. Although some got caught out by the negative voltage rails being left out when trying to power old mainboards that relied on -5 V and -12 V rails being present, in general these changes were minor enough to be incorporated into the natural upgrade cycle of computer systems. Not so with ATX12VO, as it absolutely requires an ATX12VO PSU and mainboard to accomplish the increased efficiency goals.

While it is possible to make an ATX v2.x to ATX12VO adapter that passively adapts the 12 V rails to the new 10-pin connector and boosts the 5 VSB line to 12 VSB levels, this actually lowers efficiency instead of increasing it. Essentially, the only way for ATX12VO to make a lot of sense is for the industry to switch over immediately, with everyone upgrading as well and not reusing non-ATX12VO-compatible mainboards and PSUs.

Another crucial point here is that OEMs and SIs are not required to adopt ATX12VO. Much like Intel’s ill-fated BTX alternative to the ATX standard, ATX12VO is a suggested standard that manufacturers and OEMs are free to adopt or ignore at their leisure.

Probably most important here are the obvious negatives that ATX12VO introduces:

  • Adding another hot spot to the mainboard and taking up precious board space.
  • Turning mainboard manufacturers into PSU manufacturers.
  • Increasing the cost and complexity of mainboards.
  • Routing peripheral power (including case fans) from the mainboard.
  • Complicating troubleshooting of power issues.

Internals of Seasonic’s CONNECT modular power supply. (Credit: Tom’s Hardware)

Add to this potential alternatives like Seasonic’s CONNECT module. This does effectively the same as the ATX12VO standard, removing the 5 V and 3.3 V rails from the PSU and moving them to an external module, off of the mainboard. It can be fitted into the area behind the mainboard in many computer cases, making for very clean cable management. It also allows for increased efficiency.

As PSUs tend to survive at least a few system upgrades, it could be argued that from an environmental perspective, having the minor rails generated on the mainboard is undesirable. Perhaps the least desirable aspect of ATX12VO is that it reduces the modular nature of ATX-style computers, making them more like notebook-style systems. A more reasonable solution here might be a CONNECT-like module which offers both ATX 24-pin and ATX12VO-style 10-pin connectivity options.

Thinking larger

In the larger scheme of power efficiency it can be beneficial to take a few steps back from details like the innards of a computer system and look at e.g. the mains alternating current (AC) that powers these systems. A well-known property of switching mode power supplies (SMPS) like those used in any modern computer is that they’re more efficient at higher AC input voltages.

Power supply efficiency at different input voltages. (Credit: HP)

This can be seen clearly when looking for example at the rating levels for 80 Plus certification. Between 120 VAC and 230 VAC line voltage, the latter is significantly more efficient. To this one can also add the resistive losses from carrying double the amps over the house wiring for the same power draw at 120 V compared to 230 VAC. This is the reason why data centers in North America generally run on 208 VAC according to this APC white paper.
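The wiring-loss part of that argument is easy to put in numbers: for the same power draw, current scales inversely with voltage, and the I²R loss in the wiring scales with the square of the current. A rough sketch (the 600 W draw and 50 mΩ of round-trip wiring resistance are illustrative assumptions, not figures from the article, and the load is assumed to draw at unity power factor):

```python
# I^2 * R losses in house wiring for the same power draw at two line voltages.
# Load power and wiring resistance below are illustrative assumptions.

LOAD_W = 600.0     # assumed PSU input power
WIRE_OHMS = 0.05   # assumed round-trip wiring resistance

def wiring_loss(load_w: float, volts: float, r_ohms: float = WIRE_OHMS) -> float:
    """Power dissipated in the wiring, assuming unity power factor."""
    current = load_w / volts
    return current ** 2 * r_ohms

loss_120 = wiring_loss(LOAD_W, 120.0)
loss_230 = wiring_loss(LOAD_W, 230.0)
print(f"120 VAC: {loss_120:.2f} W lost, 230 VAC: {loss_230:.2f} W lost")
print(f"ratio: {loss_120 / loss_230:.2f}x")  # (230/120)^2, about 3.67x
```

Whatever the absolute numbers for a given installation, the ratio is fixed by the voltages: roughly 3.7 times more wiring loss at 120 VAC than at 230 VAC for the same load.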

For crypto miners and similar, wiring up their computer room for 240 VAC (North American hot-neutral-hot) is also a popular topic, as it directly boosts their profits.

Future Outlook

Whether ATX12VO will become the next big thing or fizzle out like BTX and so many other proposed standards is hard to tell. One thing which the ATX12VO standard has against it is definitely that it requires a lot of big changes to happen in parallel, and the creation of a lot of electronic waste through forced upgrades within a short timespan. If we consider that many ATX and SFX-style PSUs are offered with 7-10 year warranties compared to the much shorter lifespan of mainboards, this poses a significant obstacle.

Based on the sounds from the industry, it seems highly likely that much will remain ‘business as usual’. There are many efficient ATX v2.x PSUs out there, including 80 Plus Platinum and Titanium rated ones, and Seasonic’s CONNECT and similar solutions would appeal heavily to those who are into neat cable management. For those who buy pre-built systems, the use of ATX12VO is also not relevant, so long as the hardware is compliant to all (efficiency) regulations. The ATX v2.x standard and 80 Plus certification are also changing to set strict 2-10% load efficiency targets, which is the main target with ATX12VO.

What would be the point for you to switch to ATX12VO, and would you pick it over a solution like Seasonic CONNECT if both offered the same efficiency levels?

(Heading image: Asrock Z490 Phantom Gaming 4SR with SATA power connected, credit: c’t)

Fri, 05 Aug 2022 12:00:00 -0500 | Maya Posch | https://hackaday.com/2021/06/07/intels-atx12vo-standard-a-study-in-increasing-computer-power-supply-efficiency/
Lenovo Brings a Decade of Liquid Cooling Experience to the Faster, Denser, Hotter HPC Systems of the Future

Lenovo ThinkSystem SD650-N-V2 with Neptune warm water cooling

[SPONSORED CONTENT]  HPC systems customers (and vendors) are in permanent pursuit of more compute power with equal or greater node density. But with that comes more power consumption, greater heat generation and rising cooling costs. Because of this, the IT business – with a boost from the HPC and hyperscale segments – is spiraling up the list of industries ranked by power consumption. According to ITProPortal, data center power use is expected to jump 50 percent by 2030.

The combination of higher electrical consumption and costs, and higher carbon emissions is viewed with increasing alarm, and has become a limiting factor for HPC. Consider this: with the annual electric bill for an exascale system expected to approach $20 million, it’s been argued that the next great supercomputing throughput milestone, zettascale (1,000 exaFLOPS), is a practical impossibility using current technologies and power sources.

In the face of this bleak, high-consumption and high-carbon future, the HPC server market has increasingly turned to energy efficient liquid cooling to hold down energy costs. The transition away from air cooling initially was regarded as a risky proposition. But that outlook has changed significantly as water cooling technologies have matured as they have been implemented on a multi-generational basis at supercomputing centers housing some of the world’s most powerful and expensive HPC systems.

An early liquid cooling innovator in HPC, systems maker Lenovo dates its first major water-cooled installation to 2012 at one of Europe’s biggest supercomputing centers (more on this below). The company’s line of Neptune™ liquid cooling technologies provides a three-pronged cooling approach that can be used together or independently: direct warm-water cooling (DWC), liquid-assisted air cooling, and rear-door heat exchangers (RDHX), along with other technologies like software designed to run systems more efficiently.

Lenovo leads the HPC server industry in the use of warm water cooling – the warmer the water, the less energy is expended chilling it either before or after it flows through servers. You might not think 122-degree (Fahrenheit) water could cool a server, but Lenovo’s doing it. The company also is developing water-recycling capabilities that could move HPC centers toward carbon-neutral status, possibly even carbon-negative in the future.

Another point of distinction is that Neptune™ DWC technologies utilize leak-resistant copper tubing to circulate water through more system components than any competing design. This comprehensive approach to liquid cooling removes more than 90 percent of the heat generated by the server.

Let’s look at Lenovo’s highest performance, most densely packaged server, the fan-free ThinkSystem SD650-N V2 GPU server with Neptune™ direct warm water-cooling technology, an HPC-AI/hyperscale system. It utilizes water up to 50⁰C/122⁰F to remove heat from two 3rd Gen Intel Xeon Scalable CPUs, four NVIDIA HGX A100 GPUs and NVIDIA HDR InfiniBand networking, along with memory, network interface controllers, local storage and voltage regulators.
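How much heat a given water flow can carry away follows from the basic relation Q = ṁ · c_p · ΔT. A back-of-the-envelope sketch (the per-node flow rate and temperature rise below are illustrative assumptions, not Lenovo specifications):

```python
# Heat carried away by a water loop: Q = mass_flow * c_p * delta_T.
# Flow rate and temperature rise are illustrative assumptions.

C_P_WATER = 4186.0  # J/(kg*K), specific heat of water
DENSITY = 1.0       # kg per litre, close enough for warm water

def heat_removed_w(litres_per_min: float, delta_t_k: float) -> float:
    """Heat (in watts) removed by a water loop at the given flow and temp rise."""
    mass_flow = litres_per_min * DENSITY / 60.0  # kg/s
    return mass_flow * C_P_WATER * delta_t_k

# e.g. an assumed 1.5 L/min through a node, with the water warming by 10 K:
print(f"{heat_removed_w(1.5, 10.0):.0f} W per node")
```

Even a modest flow of warm water absorbs on the order of a kilowatt per node, which is why 45-50 °C inlet water is still a perfectly effective coolant for CPUs and GPUs running far hotter than that.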

Still image from animation of ThinkSystem SD650-N V2 liquid cooling

Another benefit: compared with air cooling – with high-rev fans and air conditioners roaring away – water cooling is much quieter. So along with reduced greenhouse gas pollution, the ThinkSystem SD650-N V2 has less nerve-wracking noise pollution.

The server delivers up to 30 to 40 percent data center cooling cost reduction, Lenovo reports, and supports PUE ratings below 1.1 depending on the data center design. It also enables data center growth without adding more Computer Room Air Conditioning (CRAC) units. And because liquid cooling keeps servers operating at lower temperatures, Neptune™ extends the lifespans of parts and servers, according to Lenovo.
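PUE (Power Usage Effectiveness) is simply total facility power divided by IT equipment power, so a rating below 1.1 means less than 10 percent overhead for cooling and everything else. A quick sketch (the sample facility figures are made up for illustration):

```python
# PUE = total facility power / IT equipment power.
# The sample figures below are illustrative, not measurements.

def pue(it_kw: float, cooling_kw: float, other_kw: float = 0.0) -> float:
    """Power Usage Effectiveness for a facility."""
    return (it_kw + cooling_kw + other_kw) / it_kw

air_cooled = pue(it_kw=1000, cooling_kw=400)  # a typical air-cooled room
liquid = pue(it_kw=1000, cooling_kw=80)       # warm-water cooled
print(f"air-cooled PUE ~ {air_cooled:.2f}, liquid-cooled PUE ~ {liquid:.2f}")
```

For a megawatt-class IT load, the difference between those two ratios is hundreds of kilowatts of continuous cooling draw, which is where the 30 to 40 percent cooling cost reduction comes from.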

A single, standard 42U rack holds 36 of these servers and delivers up to 2PFLOPs of compute performance, enough to earn a spot on the current TOP500 list of the world’s most powerful supercomputers.

Looking ahead, the industry faces steepening cooling challenges as the power drawn by CPUs, GPUs and even memory DIMMs and NICs steadily climbs. In 2006, 20kW were required to power a 56-node, 224-core rack for a Lenovo HPC system installed at Eli Lilly; by 2018, the 72-node/3,456-core racks within the Lenovo SuperMUC-NG supercomputer at the Leibniz Supercomputing Centre (LRZ) in Munich consumed 46kW per rack. The ThinkSystem SD650-N V2 comes in at 80 kW per rack, and Lenovo anticipates that by 2024 its high-end systems will consume 180 kW.
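Working the rack figures cited above through to per-node power makes the trend concrete (rack wattages and node counts are taken from the text; the SD650-N V2 row uses the 36-server rack mentioned earlier):

```python
# Watts per node across the rack generations cited in the text.
racks = [
    ("2006 Eli Lilly", 20.0, 56),        # (label, rack kW, nodes per rack)
    ("2018 SuperMUC-NG", 46.0, 72),
    ("2021 SD650-N V2", 80.0, 36),
]
for label, rack_kw, nodes in racks:
    print(f"{label}: {rack_kw / nodes * 1000:.0f} W per node")
```

Per-node power has risen from roughly 360 W to over 2.2 kW in fifteen years, a density that air cooling alone struggles to handle.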

SuperMUC-NG supercomputer

LRZ has been a pioneer in energy-efficient supercomputing for over a decade.  From 2012 to the end of this year, four generations of IBM/Lenovo water-cooled supercomputers have been stood up at LRZ. It’s an envelope-pushing site utilizing Lenovo’s most advanced liquid technologies, which the company has then extended to the broader HPC industry.

A key to Lenovo’s cooling leadership is its long experience advancing liquid-related technologies, according to Lenovo’s Martin Hiegl, Director, HPC Customer Solutions. Take, for example, the heatsinks used in Lenovo HPC servers.

“The shiny copper water loop and big manifolds are the most visible part of Lenovo Neptune,” Hiegl said. “The secret sauce lies, however, within the layout across a system for stable cooling capability with low pressure across the different heat sources and even the tiny details like the microfins within the heat sink itself. The more than a decade of experience our Lenovo engineers bring to the table makes them industry leading in their designs.”

In addition, Hiegl said Lenovo engineers focus on achieving consistent operational temperatures across and among processors.

“For example, between the different CPUs you want to maintain temperature balance,” he said. “That’s why our water loops on the node are very carefully designed to bring optimal cooling to the different heat sources so that you don’t have one CPU running at 80 degrees Celsius and another CPU running at 90 degrees Celsius, which can create thermal jitter with different performance between the two CPUs on the same node. We design our systems specifically to avoid that. Our decade of experience doing this is something no one else brings to the table.”

Looking back at 2012 – when HPC-class servers only had CPUs and generated much less heat – liquid cooling was a new approach that at that time made some people nervous. Scott Tease, Lenovo’s Vice President and General Manager, HPC & AI, was part of the team that installed the company’s first supercomputer at LRZ.

Lenovo’s Scott Tease

“We were freaking out a little bit, it was 9700 nodes, liquid cooled for the first time ever, and it got us nervous,” he told StorageReview in a podcast interview. “But it’s been an incredible story, and ever since, the customer has been happy. Some of those nodes are just now coming out of production…, that’s how long it’s been in production. But what we’re seeing with Neptune and with liquid cooling in general is that the reasons to go towards liquids are even (stronger) than a decade ago.”

He said LRZ had compelling cost motives for making the jump to liquid since power costs are two times more in Germany compared to the U.S. “So every time they could drive power consumption out it had a pretty big benefit for them on their energy bill,” Tease said, with savings amounting to hundreds of thousands of Euros per month.

Bottom line: LRZ estimates liquid cooling and all the optimizations with Lenovo around it has reduced their energy costs by 30 percent.

Longer term, Lenovo wants to work with customers like LRZ that already recycle water heated by HPC systems for such purposes as heating buildings and generating colder water through adsorption technology for an even wider cooling impact, in combination with other renewable energy sources, to eliminate carbon emission altogether.

Tease said such aspirations support the growing sustainability ethic taking hold in the HPC community, with liquid cooling playing a key role. “That’s been surprising to me, how broad it is globally,” he said. “People see liquid cooling and its advantages from an energy efficiency, carbon reduction standpoint. It’s resonating universally, globally.”

Fri, 05 Aug 2022 03:27:00 -0500 | Doug Black | https://insidehpc.com/2022/08/lenovo-brings-a-decade-of-liquid-cooling-experience-to-the-faster-denser-hotter-hpc-systems-of-the-future/
Stereotaxis, Inc. (STXS) CEO David Fischel on Q2 2022 Results - Earnings Call Transcript

Stereotaxis, Inc. (NYSE:STXS) Q2 2022 Earnings Conference Call August 9, 2022 10:00 AM ET

Company Representatives

David Fischel - Chairman, Chief Executive Officer

Kim Peery - Chief Financial Officer

Conference Call Participants

Josh Jennings - Cowen

Adam Maeder - Piper Sandler

Neil Chatterji - B. Riley

Alex Nowak - Craig-Hallum Capital Group

Frank Takkinen - Lake Street Capital Markets

Nathan Weinstein - Aegis Capital

Javier Fonseca - Spartan Capital

Operator

Good morning! Thank you for joining us for Stereotaxis’ Second Quarter 2022 Earnings Conference Call.

Certain statements during the conference call and question-and-answer session period to follow may relate to future events, expectations and as such constitute forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995.

Such statements involve known and unknown risks, uncertainties and other factors which may cause the actual results, performance or achievements of the company in the future to be materially different from the statements that the company’s executives may make today. These risks are described in detail in our public filings with the Securities and Exchange Commission, including our latest periodic report on Form 10-K or 10-Q. We assume no duty to update these statements.

At this time all participants have been placed on a listen-only-mode. The floor will be open for questions and comments following the presentation. As a reminder, today’s call is being recorded.

It is now my pleasure to turn the floor over to your host, David Fischel, Chairman and CEO of Stereotaxis.

David Fischel

Thank you, operator and good morning everyone. I'm joined today by Kim Peery our Chief Financial Officer.

We are operating in an environment that remains very similar to what I described on our last call in May. It is both a challenging and exciting period for Stereotaxis. The macro business environment remains littered with a host of pandemic related supply chain, regulatory, personnel and economic disruptions.

We saw a nearly 70% reduction in China procedure volumes during the second quarter, continued to see delays to hospital purchasing decisions and construction projects, have not yet observed an improvement in supply chain reliability, and have seen inflationary pressures on various expenses. The optics of our financial results in the quarter reflect these challenges, particularly the delays in hospital construction, with negligible system revenue recognized in the quarter contributing to reduced revenue compared to last year's second quarter.

Despite these pressures and the poor optics of our financial results, Stereotaxis is making significant progress commercially and technologically. I am pleased with our progress and confident in where we stand and the path ahead of us. We have continued demand for our technology; we are advancing a transformative innovation pipeline. We are assembling an all-star commercial team and we're doing all this while maintaining financial stability and strength.

Let me first touch upon our recent commercial performance. During the second quarter we received three orders for Genesis systems, two of which we received since our last call. All three of the orders came from the United States. Two orders are replacement cycle systems where Genesis will replace aged Niobes at existing hospitals.

The third order is unique and exciting. An existing hospital customer who already operates a successful robotic EP program, decided to establish a second robotic lab at the same hospital. This is a prestigious hospital led by a key opinion leader in the field, and will be the first EP program in the U.S. with two of our robotic systems.

Our continued pace of Genesis orders bodes well for future financial results, as we now have over $12 million in backlog of system orders waiting to be shipped, installed and converted into revenue. While timing of revenue recognition is often outside of our control and dependent on hospital construction, orders in our backlog are essentially guaranteed with an over 99% conversion rate and significant non-refundable down-payments, providing confidence in their realization.

Our efforts to grow capital sales are performed alongside a continued commitment to the success of existing robotic practices and the development of the holistic commercial infrastructure that drives such success. Two highlights from the second quarter include the graduation of an additional cohort of fellows in our Robotic EP Fellows Program and the publication of the Robotics Special Issue in the Journal of Atrial Fibrillation.

11 Electrophysiology Fellows graduated from our Fellows program in the past quarter. We expect to graduate 19 this year, and in total 65 Fellows have graduated from our program globally since it was launched. These Fellows represent the future of the field, and enter it with an appreciation of and confidence in our technology, which bodes well for us going forward.

The body of clinical literature supporting the clinical value of robotics in EP also substantially increased, with 16 peer-reviewed publications included in a special issue of The Journal of Atrial Fibrillation in June. The publications covered a broad range of topics, including the use of our technology across the spectrum of arrhythmias and in several more novel ways: alongside pre-operative imaging and intraoperative mapping technology, without the use of X-ray, and remotely over long distances. We continue to view the quantity and quality of clinical data on our technology as a strong foundation for future adoption.

Most impactful to our mid and long term commercial performance remains the realization of Stereotaxis' strategic innovation plan. As a reminder, our innovation strategy consists of five key pillars: a mobile system that enables broad accessibility of robotics; our own independent ablation catheter portfolio; devices that expand our technology to new endovascular indications; a China-specific product ecosystem; and a digital platform for broad operating room connectivity. Each of these will individually serve as a substantial growth driver that dwarfs our existing business, but the five efforts are also synergistic and collectively serve as the foundational product ecosystem in our mission to transform endovascular surgery with robotics.

We were very pleased a month ago to announce the CE Mark submission of our proprietary robotically navigated ablation catheter MAGiC. Submission of MAGiC reflects the culmination of an extensive design, development, manufacturing and testing effort, and I want to congratulate the many individuals who were instrumental in that effort.

The submission was made on schedule with the timeline we provided at the start of this year, and complies with the newer, more stringent MDR regulations in Europe. While the timeline for approval of the catheter is not knowable at this stage, we're preparing for commercialization upon receipt of CE Mark as early as year-end. The catheter design builds upon nearly 20 years of experience and learning since the existing Biosense magnetic catheter was developed, and we are very excited for the clinical, commercial and strategic benefits MAGiC will provide.

Beyond the significant milestone with MAGiC, we are methodically advancing the other technological pillars of our innovation strategy. These are advancing against the headwinds of supply chain challenges, COVID quarantines in China, personnel disruptions and the regulatory distractions caused by MDR in Europe. Despite those, we still view an initial launch of the mobile robot around this time next year as realistic, and the hardware, electronics and software aspects of the system are advancing nicely.

The MicroPort and Stereotaxis collaboration continues to progress well, and we view a comprehensive product ecosystem in China coming together during the second half of 2023. Ramping up production of guidewires for the required regulatory testing has gone slower than projected at our contract manufacturer, and we now expect regulatory approvals and an initial launch in the first half of 2023, rather than at the start of the year.

Finally, submission of an application to the FDA to initiate a prospective IDE trial for the MAGiC catheter is currently waiting on certain animal trials that we expect to complete by year end. Development progress is inherently nonlinear, particularly in this environment, but we are pleased by the breadth and quality of the impactful developments being advanced.

The methodical progress across multiple fronts on our innovation strategy brings us closer to a commercial break out and consistent long term revenue growth. As our technology pipeline becomes derisked and approaches the market, we are placing increased focus on ensuring the right commercial team, infrastructure and processes are in place to drive substantial revenue growth.

I was very excited to be able to announce last week that two highly experienced and successful commercial leaders are joining Stereotaxis. Frank Van Hyfte and Tim Glynn bring to Stereotaxis decades of significant and highly relevant experience. They have scaled businesses like ours to an order of magnitude larger, and lived through the complexity and rapid pace of pioneering new markets.

Their skill sets and geographical focuses are complementary to each other and are complementary and additive to our commercial leaders Michael Tropea and Casey Payne. That we were able to find leaders of this caliber to enthusiastically join us is a testament to the opportunity in front of us and the company we are building. I personally feel grateful to have these commercial leaders as partners in our journey, and encouraged by the fact that their leadership will guide our commercial activities.

The puzzle pieces are starting to come together on both the technological ecosystem and the commercial organization. Our progress on both these fronts supports substantial long term growth in electrophysiology and, more broadly, in endovascular intervention.

Kim will now provide some commentary on our financial results, and then I'll make a few financial comments as well before opening the call to Q&A.

Kim Peery

Thank you, David and good morning everyone. Revenue for the second quarter of 2022 totaled $6.2 million. This was down from $9.1 million in the prior year second quarter, primarily due to recognizing revenue on just a partial robotic system this quarter, compared to two systems last year.

Recurring revenue for the quarter was $5.6 million compared to $6.1 million in the prior year second quarter, reflecting headwinds in procedure volumes and some reduction in service revenue as hospitals approach replacement cycles.

Gross margin for the second quarter of 2022 was 76% of revenue, with system gross margin of 16% and recurring revenue gross margin of 83%. Operating expenses in the quarter of $9.8 million included $2.7 million in non-cash stock compensation expense. Excluding stock compensation expense, adjusted operating expenses were $7.2 million, consistent with the prior year second quarter.

Operating loss and net loss for the second quarter of 2022 were both $5.2 million, compared to $3.4 million and $1.2 million in the previous year. Adjusted operating loss and adjusted net loss, excluding non-cash stock compensation expense, were $2.5 million in the current quarter, compared to a negative $0.6 million and a positive $1.6 million in the prior year quarter.

Negative free cash flow for the second quarter was $1.8 million, compared to $0.1 million in the prior year second quarter and $1.2 million in the second quarter of 2020. At June 30 we had cash and cash equivalents of $35.1 million and no debt.

I will now hand the call back to David.

David Fischel

Thank you, Kim. I wanted to add a few additional comments on two key topics, revenue expectations for the remainder of this year and our balance sheet and financial stability.

On the first topic, we view the revenue reported this quarter as a nadir in our performance. Our pace of system orders and current system backlog of over $12 million support our prior guidance of system revenue and overall revenue growth for the year.

If we were able to install all the systems in our backlog, we would expect approximately $15 million in system sales for this year. Typically we have discussed an approximate three to 12 month timeline between when a robot is ordered and when it is shipped and installed for revenue recognition.

We have seen significant variability in these timelines, with various hospital projects delayed long beyond what our customers originally expected. Those hospital construction delays introduce the risk that a portion of backlog may not be recognized this year but instead next year, introducing caution to that guidance.

As all of the orders in our backlog will be delivered eventually, any such delays would generate revenue growth in the coming year. The significant timelines associated with capital purchases and hospital construction reinforce the importance of our strategy to make robotics broadly accessible by bypassing logistical and construction complexities. As mentioned earlier, based on our current progress, we expect commercial availability of our mobile robot by the middle of next year.

On the topic of financial stability, we are obviously cognizant of evolving macro concerns and the potential for extended periods of economic and capital market pressure. Our commitment to managing Stereotaxis in a financially prudent and disciplined fashion serves us well in that environment.

Recent inflationary pressures have started to impact various costs for supplies, services and transportation. We are working to mitigate cost increases, and overall we remain confident in our financial position and ability to manage the business with a modest, controlled burn as we invest for growth.

While we had higher than normal cash utilization in the first half of this year, much of this was due to increased spending on inventory and one-time costs to establish our new headquarters. We expect continued investment in inventory in the back half of the year, but expect to end the year with approximately $32 million in cash and no debt. I view our normalized operating business as having approximately a $1 million cash burn rate per quarter. That financial prudence, combined with our strong balance sheet, leaves us in a comfortable position to continue advancing our strategy in a self-sufficient fashion without the need for additional financing.

I recognize the poor optics of our financial results, but view this alongside confidence that our fundamental progress is significant and our position is strong. We have continued demand for our technology. We are advancing a transformative innovation pipeline with multiple impactful launches in 2023 and beyond. We are assembling an all-star commercial team, and we are able to do this while maintaining financial stability and strength.

We look forward to now taking your questions. Operator, can you please open the line for Q&A.

Question-and-Answer Session

Operator

Certainly. [Operator Instructions]. And we will now take our first question from Josh Jennings with Cowen. Please go ahead.

Josh Jennings

Hi! Good morning! Thanks for taking the questions David. It was great to see some new system orders come in this quarter and our checks suggest that demand for robotic navigation is building. Just wondering if you could just help us think about the sales funnel, about the replacement channel and the Greenfield channel as we sit here today and relative to earlier in the year.

David Fischel

Hi Josh! Good morning! And so we’ve continued to make progress on the infrastructure for managing a sales pipeline. I think on the last call we talked a little bit more about that new infrastructure that was being built and that has been fairly kind of fully operationalized now in the United States and in the process of being operationalized outside the United States.

Overall the sales funnel looks relatively good. I don't think there are dramatic differences from the type of commentary we gave at the beginning of the year, where we said we had more than a couple of dozen systems in that pipeline. But there is better quality of information now that we have the improved capital sales pipeline infrastructure, and overall we still see a good range of both replacement-cycle and Greenfield systems in that pipeline globally, and we've been grinding away at them.

I think, like you say, the pace of orders that we had is consistent overall with guidance that would drive growth this year and in future years, but hopefully we can also increase that pace at some point.

Josh Jennings

Thank you. And just thinking about the order attached to – well, the second robotics lab at one of your customer centers. Our sense is that center is within a big hospital network, and I know we've talked about this before, a little bit on the last earnings call as well. But could you remind us how you are positioned and how your commercial team is attacking IDNs, and any other details you can share about the decision by this EP lab to build out a second robotic lab.

David Fischel

Sure. So you are correct – that was a good guess. It is a hospital that has good historical experience with our system. It's part of a large IDN across the U.S. and decided, based off of that experience, to buy a second robot.

And overall I'd say that, obviously, the reason why they adopted a second system is because they have experience with both the clinical value of our technology and the economic value for the hospital of our technology, in allowing them to treat patients that otherwise they wouldn't be able to treat, and in driving efficiencies across the system. It was very nice earlier this year visiting the hospital, and the head of the cardiovascular service line was talking so highly about how, when they review all the data, our system has made complex ablations far more efficient and reduced the variability and timelines of those procedures. So that was really a vote of confidence, not just from the clinicians, but also from the administrators there.

From an IDN perspective, we do think that we have sufficient experience in the field, again both from a clinical data perspective and from an economic-value-for-a-hospital perspective, to have meaningful conversations with larger IDNs, with the goal of having a relationship that spreads robotics more broadly across an IDN and proves that there's value not just when robotics is adopted from a bottom-up perspective, but also top-down.

Those discussions are obviously larger, strategic sales discussions, and so it's always difficult to know when or how they will evolve. But we definitely have those discussions and think that at some point it makes a lot of sense for an IDN to enter into such a relationship. So hopefully at some point we'll be able to update you more.

Josh Jennings

Thanks a lot David. I appreciate it.

David Fischel

Thank you, Josh.

Operator

And the next question comes from Adam Maeder with Piper Sandler. Please go ahead.

Adam Maeder

Hi David! Hi Kim! Good morning and thanks for taking the questions. Maybe just to start, one clarification question on the mobile RMN system. I think I heard you expect launch in both the U.S. and OUS – just wanted to clarify that. And then maybe just talk about where we are from a design standpoint – do we have design lock – and your level of confidence in hitting these timelines. And then I have a follow-up or two. Thanks.

David Fischel

Sure. Hi Adam! Good morning! So you cut out a little bit during your question, so I think I understood it fully, but in case I'm wrong please correct me. We talked about, given where we are right now in the development process, feeling confident that we should be able to have a launch by this time next year. That would be in at least one of the two major geographies, either Europe or the U.S., and I would assume that both geographies would follow relatively soon after each other – within a few months of each other – but in at least one of them we should have a launch by mid next year.

And overall, from a development perspective, again, there are the mechanical aspects, the electrical aspects, the control software, the user interface software – various parts that have to come together. There have been all sorts of challenges along the path, particularly on the supply chain side, on the electronics side and also on the mechanical hardware side. But overall the parts are coming together.

We have not yet started the testing of a system, but we've done large amounts of the development and overall feel very good about where we stand – that we'll start the V&V [verification and validation] testing prior to year end, will be able to submit for regulatory approval around that time, and that that will keep to the timeline we suggested of a launch by midyear.

Adam Maeder

That's very helpful color David, thank you for that. And then maybe for the next question, just on the MAGiC RF ablation catheter. I heard the message on expected timelines for Europe and the launch there, but I wanted to ask about the U.S. side – when do you think the U.S. IDE trial can get going? Anything on trial design that you can share at this point in time, and ultimately how do we think about potential U.S. approval for that technology? Thanks.

David Fischel

Sure. So the European submission required a huge body of testing – bench testing, lab-related testing – and required a whole range, I mean dozens, of animal studies, and did not require a human study. It will require a post-approval study in the EU. In the U.S. we have all of those same requirements, and the vast majority of the CE Mark dossier will be identical or nearly identical for the U.S. IDE submission, so that's all set and ready.

We do have, though, about a dozen additional animal studies that were requested by the FDA beyond the few dozen studies that we submitted for the European submission, and those animal studies require a follow-up period – relatively short, but still a follow-up period. In order to run those studies we've been developing our own animal study capability, and that has really been the one gating factor to being able to complete those studies and submit the IDE to the FDA. So we expect to complete those studies before year end and to be able to submit the IDE immediately upon that.

We have had multiple discussions with the FDA so far, so we have a fairly clear understanding of what the trial design should be, and I would expect around, let's say, a 150-or-so-patient study in one specific clinical indication – one specific type of arrhythmia – with a relatively short follow-up, a maximum three-month follow-up.

And so given that it's a very common arrhythmia, and given that we have an installed base of users and the catheter would be able to be used with either the Niobe or the Genesis system – so we can really benefit from our full installed base there – I think that's a trial that should enroll very quickly once we gain the IDE approval and can actually start the study. Both enrollment and follow-up should be done fairly quickly.

Adam Maeder

Okay, great, that's very helpful color, and if you don't mind I'm going to try and sneak one more in. I noticed in the press release there was some commentary about the commercial infrastructure and the progress being made there, kind of laying the foundation, and you've historically been very judicious and conservative with spend.

But I also think in the past you talked about the MAGiC catheter launch being the impetus for going more on the offensive, and I know we're not quite there yet, but just wondering: what are the plans looking ahead for commercial infrastructure and building out the teams? Any additional color you can provide would be much appreciated. Thanks again David.

David Fischel

Sure. So yeah, I think you're completely right that we take seriously our commitment to running the company in a financially judicious fashion, and I think that discipline does create a lot of value for Stereotaxis and ensures that we don't waste shareholder value and capital. But now that the product ecosystem starts to come together – obviously the catheter, but also the mobile system and the range of technologies that are coming together – it does warrant focusing more on the commercial team and how to ensure that we have an excellent commercial organization. And with that, obviously Europe is going to be a particular area of focus given the launch next year.

Putting in place the right leadership is the first step in that, and as we start the launch and go through the launch, I would expect a fairly substantial build-out of our European commercial team. Again, given the step-up in revenue per procedure that the MAGiC catheter provides, that will be, from a financial perspective for the company, a fairly low-risk build-out of the team, as you can do it very much hand-in-hand with adoption of the MAGiC catheter at specific accounts. You can do very laser-focused, pinpointed hiring, where there's a high ROI for each hire, and so I think you'll see us, probably over the course of next year, doing a fairly substantial build-out, perhaps even a doubling of the European team.

Adam Maeder

Thanks for the color David.

David Fischel

Thank you.

Operator

And our next question comes from Neil Chatterji from B. Riley. Please go ahead.

Neil Chatterji

Hi! Thanks for taking the questions. Maybe just circling back on the hospital construction environment – curious, now that we're about a month into the new quarter, if you're seeing any signs of improvement here in July and now August. And then also, is there any way to characterize any barriers to conversion you see for the systems that are in backlog – for example, are some tied to larger hospital construction projects?

David Fischel

Sure. So hi Neil! The commentary we gave today is as of today, so we continue to see delays in hospital construction – where even, you know, for a couple of the orders that we received late last year, we would have expected to be installing systems around now. It's still unclear whether we are going to deliver those systems in a couple of months, a few months, or if it's going to take longer. So there is quite a lot of uncertainty and just delays when it comes to our hospital customers and their own processes in building out labs and getting themselves organized to be ready for us to install.

So I'd say it still is a fairly messy world out there. And on your second question, you asked about uncertainty with conversion – I didn't know if that's in terms of the timeline of when an order in backlog would be converted into revenue, or the risk of whether it will convert into revenue at all. Could you clarify that?

Neil Chatterji

Yeah, maybe just in relation to the construction environment – are some of these tied to larger construction projects where, because it's a larger project, that's delaying it even more than if it was just the EP lab conversion?

David Fischel

Yeah, it's a mix. There are some where you have full build-outs of wings or full build-outs of a floor of a hospital – so those are part of a much larger project – and other ones might just be that lab. But oftentimes what you see is that there might be eight labs in a cardiovascular wing of the hospital, and the hospital will just go through lab by lab. They are doing lab one; when lab one finishes, they do lab two; when lab two finishes, they do lab three; and we might be lab whatever in that line. So any delays start to impact, like a domino effect, the labs after them. That's a fairly common scenario.

Neil Chatterji

Got it, got it. Maybe if I can add another question here. Switching gears to the potential MAGiC launch in Europe – just curious what your expectation is for how quickly sites could switch over to using MAGiC, including any regional nuances you might see there.

David Fischel

Sure. So there's a range, obviously. From a purely logistic and legal ability to launch, upon CE Mark we will be able to launch pretty much immediately in certain countries. We'll have to have a hospital contract for purchasing the product, but that should happen very, very quickly, and we can be primed to enter into those agreements almost immediately upon CE Mark. So that's very easy from a logistics perspective.

While in some countries – particularly the Nordic countries and France, let's say – there are tenders, country tenders or regional tenders, that you have to enter into. You can usually sell some amount of an approved product outside of the tender under new-technology clauses or other things, but you can't pursue wholesale adoption, wholesale transition of a site, until you go through that tender. And so that logistic aspect will mean that in certain countries, at certain accounts, there will be only partial adoption until you get through those tenders, which could take anywhere between six months and a year after the CE Mark process.

But again, at other hospitals – let's say particularly in the Netherlands, Belgium, Germany, some of the other countries in Europe – you really have no logistic hurdles once you have CE Mark. The other real effort we'll have to make is that some physicians will be motivated and excited to be the first ones in the world to use it and will be very pioneering in that effort, while at other hospitals, I'm sure, physicians will want to see that one of their peers has first done 10 cases of arrhythmia A or arrhythmia B with good outcomes, will want to be able to speak with that physician, and then based off of that will be comfortable trying it themselves. That's just the normal variability in physician dynamics.

I think we're working hard on our side as an organization to ensure that there is a thoughtful business plan for every one of our 30-some hospital accounts in Europe, where we are thinking about what the drivers for adoption are, how we approach the individual physicians, how we approach the hospital as an account, whether there are any logistic items, and exactly what applications, forms and logistic efforts we have to go through to get into the account. Our role is to be ready, so that as we gain approval we can move as efficiently and as thoughtfully as possible throughout that process.

Neil Chatterji

Great! Thanks for that. I’ll jump back in queue.

David Fischel

Thank you.

Operator

And we will now take the next question from Alex Nowak with Craig-Hallum Capital Group. Please go ahead.

Alex Nowak

Great! Good morning everyone. I was hoping to expand on the construction question around hospitals, but maybe speak to the CapEx environment at the hospitals. What is their willingness to go out there and place orders right now, particularly if they are seeing these delays in construction projects? I know some of the peers are seeing a recovery, others not so much. So just the current state of CapEx for the cath lab.

David Fischel

Sure. Hi Alex! So overall, obviously, we have still been receiving orders at a relatively regular pace, and we still see a pipeline of hospital customers that are interested. You are right that when there are construction delays, that does oftentimes lead to delays in us receiving an order – but at some point they need to order, and then there might be delays even after that, beyond the delays they were expecting.

So, I mean, definitely there have been delays of orders because of the construction dynamic at hospitals. But at the end of the day, the world is still running, hospitals are still operating, they still need to upgrade labs and build new labs, and so those delays do impact the order schedule. But orders do get done – and then unfortunately sometimes they get done and you still have delays after that, and you're waiting on the sideline to be able to deliver and install. Again, that's really a matter of timeline, not a matter of 'if,' and so we sit here and do our best given that environment.

Alex Nowak

Yep, understood. And then maybe expand on the real-world study of MAGiC in Europe. What is that going to look like – how many patients, what follow-up time? And is there a specific number of sites selected, or is this just going to be basically depending on demand?

David Fischel

So the post-market study in Europe will be defined more clearly in our discussions with the notified body in Europe over the next few months, I assume, as we get questions – we proposed a study design to them. Overall, that will be across a broader range of arrhythmias, probably in the low hundreds of patients. We'll be able to do that across a broad range of our sites in Europe, and so overall we think that will be a good trial for building relatively broad data on the catheter in Europe. Given that it's post-approval, there is somewhat less pressure on it, but obviously it will be important for us to run that trial and to be able to show that there is value across a broad range of arrhythmias.

Alex Nowak

Okay, got it. And just lastly, a clarification: what is the system backlog right now? I think it was $12 million, roughly $1.5 million per system – just trying to check the number.

David Fischel

Sure, it's over $12 million of system backlog. It's a little bit complicated to define it as an exact number of systems because, as you saw, in the second quarter we reported revenue on half of a system. We have both the X-ray component and the robot component, and there's also sometimes a large-screen display component, and we have some hospitals where there's a mix of those in play. But in total, yes, it's a mid-to-high single-digit number of systems that add up to that – again, there are some half systems out there where we've shipped one of the parts but not the other, so it's a partial shipment.

Alex Nowak

I see, I understand. Alright, thank you.

Operator

We will now take our next question from Frank Takkinen with Lake Street Capital Markets.

Please go ahead.

Frank Takkinen

Hey David! Thanks for taking my questions. I wanted to ask a little bit more on the mix of replacement versus Greenfield. You provided some color around the funnel, and I think what I heard was dozens. Maybe just talk to what the mix looks like from a replacement versus Greenfield perspective, and then how you expect that to trend on a go-forward basis?

David Fischel

Sure, hi Frank. So overall it's a good mix between the two – I think a relatively even mix. And when we look at the late-stage pipeline, the replacement cycle that we talked about in the past is becoming more and more real. You see that obviously in the results over the last quarter, and I expect it also in the results in the upcoming quarters. Some of those replacement projects are now taking place, and so we are seeing some of those come through.

With that, I'd say that obviously, from a fundamental progress perspective for the company, driving Greenfield adoption is valuable, and so we're putting more and more focus there, and we still have a range of Greenfield hospitals in the pipeline. So I think you'll see a mix, but I'd say that at least in the very late-stage pipeline it's probably skewed more toward the replacement side.

Frank Takkinen

Okay, that's helpful. And then maybe just an update on utilization. I know some of the newer sites have been trending above some of the legacy sites. So maybe just any color you can provide about utilization in the quarter and how that's been trending.

David Fischel

Yes. So if you exclude the dynamic in Asia last quarter, overall the utilization has been – I don't have the exact numbers for the Genesis installs or the new Greenfield installs we've had since the beginning of this year, but overall the utilization remained very nice, at above-average levels, in the second quarter. So we're very happy with the way the Greenfield sites and Genesis systems are being used.

I don't know if anyone on the call had an opportunity to see it – the new launch that we had in Warsaw in the first quarter. Late in the second quarter they hosted a conference at the hospital, and two, I think, out of the four live cases from that conference were using our technology. There was commentary at that conference about how impressed they were with the system and how they were using it across a broad range of arrhythmias. So overall, I'd say that the experience at our existing sites and the new launches at the Genesis sites has been very nice.

And outside of that, I'd say overall utilization remains relatively stable. We sometimes have pressures like the second quarter in Asia Pacific, but overall we have a relatively sticky recurring revenue base, and that has been a bright spot – an overall stable foundation for the business upon which to build.

Frank Takkinen

Okay, that's helpful. I'll stop there. Thanks for taking my questions.

David Fischel

Thanks Frank.

Operator

And we will now take the next question from Nathan Weinstein with Aegis Capital. Please go ahead.

Nathan Weinstein

Hi David! Good morning and thanks for taking my questions. These questions are about the innovation pillars. First, can you remind us, from your perspective, what you see as the top endovascular adjacencies that could be most attractive for Stereotaxis? And then secondly, any update on the China-specific product ecosystem – does that remain an attractive opportunity as you see it?

David Fischel

Hi Nathan! Good morning. Sure, so let me touch upon both of those. From an endovascular intervention perspective – as an adjacency to what we are doing in electrophysiology – that's obviously one of the big pillars of our growth. Our technology, Robotic Magnetic Navigation, the concept of moving endovascular devices from their distal tip, and by doing so allowing for precision, safety, reach and stability that otherwise are not possible with a manual catheter – that inherently has a lot of advantages across a range of endovascular surgery.

At the R&D Innovation Day that we hosted late last year, we talked about five specific clinical applications where there is challenge and unmet medical need that we think can be addressed very nicely with robotics, with our technology. So we are building the ecosystem of interventional devices – guidewires, guide catheters – that can be used across those clinical applications.

I think places like neurointervention – where you have particularly complex anatomy, particularly delicate anatomy – have significant unmet medical need, with many patients not getting therapy at all or not getting the therapy that would be beneficial to them. Those are particularly attractive areas where I think we can provide a lot of value.

But again, those are all five of the clinical areas we have our sights on – the others outside of neurointervention being interventional cardiology, peripheral arterial disease, AAA grafts and embolization for cancer. Those five are where we currently have our sights set, and as we bring out the right tool set to address them, I think you will hopefully see, in the first year or so, a range of clinical literature addressing multiple different clinical specialties.

On the China side, obviously the disruptions and quarantines in the second quarter were not very beneficial to overall progress, but it was very impressive – we continued to work with MicroPort even while many of them were in quarantine, advancing the range of the product ecosystem that we are developing together in China.

Again, the product ecosystem includes regulatory approvals for our robot and X-ray system, mapping integration with MicroPort's mapping system, and then a range of ablation and potentially diagnostic catheters in China – bringing the MAGiC catheter there and developing several MicroPort catheters – so there is a lot going on in that collaboration.

Overall we're very happy and very pleased with the way that collaboration is working – the way we're advancing a range of the technologies together there – and I think the right ecosystem is coming together. Like I said in the prepared remarks, it should be available in the second half of next year.

Probably different aspects of that ecosystem will become available at different times. But as that all comes together – as we get toward the back half of next year – you'll start to see it come into play, and that's really when we can start to benefit from the substantial commercial organization of MicroPort in driving broader adoption across, again, a fairly large sales team.

Nathan Weinstein

Great! Thank you, David.

Operator

Our next question comes from Javier Fonseca with Spartan Capital. Please go ahead.

Javier Fonseca

Hello! Thanks for taking my question. I have a quick question on the capital sales front. With the underlying macro challenges that we've seen so far in 2022, how has the overall commercial strategy for new system installs changed? I know the demand is still there – it's still a good system – but in the face of all these challenges, have there been any changes to the overall strategy?

David Fischel

Hi Javier! So no, I don't think there is any real change to the overarching strategy. The overarching strategy is: we have a technology which provides a lot of value, and we still have very small market share – less than 1% of just the electrophysiology market.

I think the clinical value and health care system value that we provide merits a substantially higher market share in electrophysiology, and so we are doing the right things on the commercial side to gain a fair share of that market. In tandem, obviously, we are pursuing the strategic innovations that allow us to gain adoption much more easily than with the current product setup, which requires construction: providing us with our own proprietary disposable, giving us the ability to build sales teams in a different and much more substantial fashion, and building out the technology ecosystem so that we can be used across multiple clinical applications, not just electrophysiology. So I think that strategy is very sound and we are continuing to advance it.

Javier Fonseca

Okay, thank you very much. No follow-up question.

David Fischel

Thank you, Javier.

Operator

And we have a follow-up question from Josh Jennings with Cowen. Please go ahead.

Josh Jennings

Alright, thanks for taking the follow-up, David and Kim.

I wanted to ask about the neurovascular indication. Since the Innovation Day in December it's been a number of months, and I'm sure you've interacted with some neurovascular interventionalists and neurosurgeons. I wanted to hear what type of feedback you've gotten – any specifics from clinicians on the clinical value proposition – and then your team's internal, I guess, optimism level; I'm sure it's increased over the seven months. If you could share that, that would be great. And I think you already provided part of the answer in one of the previous questions. I appreciate the follow-up.

David Fischel

Sure. So actually, in the second quarter we hosted two neurosurgeons from two different hospitals who came to St. Louis and were working with us, with the devices we developed, with [inaudible].

We'll probably have a few more visiting us late in the third quarter or early fourth quarter, and so we've been fortunate to benefit from a fairly passionate group of prestigious neurosurgeons who have been helping us in that development. Overall I think the clinical value of being able to navigate tortuous vasculature is significant in neurointervention, whether you look at thrombectomy cases – aspiration cases for ischemic stroke – or you look at coiling cases for hemorrhagic stroke.

There is a large range of patients who do not get therapy at all, or where the physician, trying to reach the site that needs therapy, can struggle for 20, 40, 60, 80, 100 minutes just trying to get through the tortuous vasculature – and that's all while the brain is in stroke. So there's a lot of clinical value to be had if you can improve the efficiency of reaching the target site and do so in a safe fashion.

And so I think that's really what motivates those physicians: they see that with our tools they can get to places that otherwise they wouldn't be able to reach, or can get there much more efficiently, much more quickly, without using a whole range of tools. That is really where the value proposition is, and as we get those tools to market, that will allow us to start to prove it in the clinical literature.

Josh Jennings

Appreciate it. Thank you.

Operator

And we have no further questions for today's call. So I would like to turn the call back to David Fischel for any additional or closing remarks.

David Fischel

Okay, thank you very much everyone for your questions and for your continued support. We look forward to working hard on your behalf in the coming months and speaking again next quarter. Thank you very much.

Operator

This concludes today's call. Thank you for your participation. You may now disconnect.

Tue, 09 Aug 2022 – https://seekingalpha.com/article/4532144-stereotaxis-inc-stxs-ceo-david-fischel-on-q2-2022-results-earnings-call-transcript