killexams.com provides the most current and 2022 current Pass4sure API-580 dump with real questions intended for new subject areas regarding API API-580 Exam. Practice the API-580 real questions and Answers to be able to Improve your comprehension and pass your current test with Substantial Marks. We assurance your pass in the Test Centre, covering all the subject areas of API-580 ensure of which you improve your current Knowledge of typically the API-580 exam. Pass with these actual API-580 questions.
Exam Code: API-580 Practice exam 2023 by Killexams.com team API-580 Risk Based Inspection Professional Exam Detail:
The API-580 Risk Based Inspection Professional certification exam is designed to assess the knowledge and expertise of professionals involved in the field of risk-based inspection in the oil and gas industry. Here are the exam details for API-580:
- Number of Questions: The exam consists of 70 multiple-choice questions.
- Time Limit: The time allocated to complete the exam is 3 hours.
Course Outline:
The API-580 certification is based on a comprehensive body of knowledge that covers various aspects of risk-based inspection. The course outline generally includes the following areas:
1. Introduction to Risk-Based Inspection (RBI):
- Overview of RBI concepts and principles.
- RBI methodologies and approaches.
- Regulatory and industry standards related to RBI.
2. Risk Assessment:
- Identification of risk sources and factors.
- Quantitative and qualitative risk assessment techniques.
- Risk ranking and prioritization.
- Risk mitigation strategies.
3. RBI Process:
- Inspection planning and scheduling.
- Data collection and analysis.
- Probability of failure determination.
- Consequence of failure assessment.
- Inspection intervals and frequencies.
5. RBI Implementation and Management:
- RBI implementation strategies and considerations.
- RBI documentation and reporting.
- Maintenance and updating of RBI programs.
- Communication and coordination with stakeholders.
Exam Objectives:
The objectives of the API-580 exam are as follows:
- Evaluating candidates' understanding of RBI concepts, methodologies, and approaches.
- Testing candidates' knowledge of risk assessment techniques and risk ranking.
- Assessing candidates' proficiency in the RBI process, including inspection planning and scheduling.
- Evaluating candidates' familiarity with various inspection techniques and methods.
- Assessing candidates' understanding of RBI implementation, management, and communication aspects.
Exam Syllabus:
The specific exam syllabus for the API-580 exam covers a wide range of Topics related to risk-based inspection. The syllabus includes:
1. Introduction to RBI:
- Definition and principles of RBI.
- Regulatory and industry standards related to RBI.
- RBI methodologies and approaches.
2. Risk Assessment:
- Identification of risk sources and factors.
- Quantitative and qualitative risk assessment techniques.
- Risk ranking and prioritization.
3. RBI Process:
- Inspection planning and scheduling.
- Data collection and analysis.
- Probability of failure determination.
- Consequence of failure assessment.
- Inspection intervals and frequencies.
5. RBI Implementation and Management:
- RBI implementation strategies and considerations.
- RBI documentation and reporting.
- Maintenance and updating of RBI programs.
- Communication and coordination with stakeholders. Risk Based Inspection Professional API Professional reality Killexams : API Professional reality - BingNews
https://killexams.com/pass4sure/exam-detail/API-580
Search resultsKillexams : API Professional reality - BingNews
https://killexams.com/pass4sure/exam-detail/API-580
https://killexams.com/exam_list/APIKillexams : Plunk launches AI-powered home analysis tool
Plunk, an AI-powered home analytics platform, has introduced a new tool called Plunk Pro that aims to transform the real estate market by offering real-time insights into home valuation, risk assessment, and remodeling possibilities.
The company’s new offering, Plunk Pro, provides real estate investors, advisors, and analysts with access to over 104 million homes nationwide. Users can receive real-time valuation data, predictive investment analysis, and thorough risk assessment.
In a market where stockbrokers and investors have been relying on real-time data for years and dealing with trades averaging $10,000, real estate has lagged behind, with deals typically worth much more. According to Ian Brillembourg, Plunk’s Head of Mobile Product, “The average sales price of a home in the US was $495,100 as of Q2 2023 — yet until now, there was no way for real estate brokers and investors to have access to real-time property valuation data and analysis.”
Plunk Pro, a web- and mobile-based application, will offer functionalities such as immediate home value determination, refined value adjustment, neighborhood comparisons, real-time market insights, remodel value estimation, and project recommendations for improving home value. These features are part of Plunk’s proprietary Dynamic Valuation Model, designed to Improve accuracy and facilitate informed investment decisions.
Co-founder and CEO of Plunk, Brian Lent, stated that the company’s mission is to “unlock confident investing in the largest asset class in the world,” through the innovative use of artificial intelligence, deep learning, and other advanced technologies.
Apart from individual users and small teams, Plunk’s AI-powered home analytics will also be available for enterprise customers via API. The move promises to revolutionize how real estate professionals approach property investments by providing a clear and comprehensive understanding of various factors that influence home value.
With Plunk Pro, investors and real estate professionals now have access to an advanced platform that takes the guesswork out of property valuation and investment decisions, offering tools that could reshape the industry.
This content was generated using AI and was edited by HousingWire’s editors.
Tue, 08 Aug 2023 02:21:00 -0500HousingWire Newsroomen-UStext/htmlhttps://www.housingwire.com/articles/plunk-launches-ai-powered-home-analysis-tool/Killexams : Big Dawgs In Automated Trucking Make Big Moves Towards Commercialization
Torc Robotics, a subsidiary of Daimler Trucks, is working towards a 2027 commercialization date.
Torc Robotics
Introduction dates of commercial driverless trucks are increasingly coming into focus. Daimler Trucks has – for the first time – set a date for commercialization of their self-driving trucks. Meanwhile, Aurora has provided a fresh data dump of their plans to put driverless trucks on the road next year, as well as updated metrics. Both companies have provided business projections. Waymo is taking a contrary approach.
In the automated driving world, vehicle manufacturers move on different timescales than startups. They don't have to be in a hurry because they’re selling huge quantities of cars and trucks every year. Compare this to the startup world, where venture investors are looking to see an investment return, creating a “get to market really fast” mentality. In between are the publicly traded companies with similar pressures, but the investor base is much broader (and very fickle). Publicly traded Aurora and venture-funded Kodiak Robotics both aim to release their first fully driverless truck to customers in 2024. As a perfect example of these dynamics, Daimler’s trucks will come a few years later.
There’s plenty to unpack, let’s get at it.
Daimler’s Boston Truck Party
Daimler Trucks spun off from Mercedes-Benz about a year ago. The first-year anniversary was commemorated by a capital markets day in Boston last month, where CEO Martin Daum covered a wealth of issues, including EV, AV, recurring revenue streams, and their ESG approach. Not surprisingly, his overall outlook was optimistic. This Daimler deck from the event offers fascinating insights.
Daum announced that their autonomous trucking product could come “as soon as” 2027. Previously, neither Daimler, nor their subsidiary Torc Robotics, had stated a specific launch date, only referring “the last half of this decade.”
Here's what’s cool: not only did Daum announce their planned commercialization date for automated trucks, he laid out a four-phase deployment approach encompassing where they will deploy and in what sequence. Phase 1 will encompass California, Arizona, New Mexico, and Texas on interstate highways I-10 and I-40. The I-40 route will continue into Arkansas and on to Memphis. Oklahoma City to St Louis on I-44 is also included. Phases 2 and 3 will add more Texas routes, as well as continuing eastward from Memphis on I40, connecting to I-95 on the Eastern seaboard. I-10 plays prominently coast to coast, and in Florida there’s a tie-in with I-75 to reach Atlanta. I-95 and the major interstate highways nationally come online in Phase 4.
As you can see from the image below, next to the four-phase map is a bar chart indicating mileage added each year from 2027 to 2030. This is just conceptual, as the vertical axis is unlabeled. There’s no explicit statement that the map phases match with these four years, but with the two charts side by side that’s my hunch.
Daum also reiterated Daimler’s “open” go-to-market plan, in that they will sell customers their homegrown robotic driving system via Torc Robotics, at the same time supplying vehicles that customers can equip with autonomous driving technology from other vendors. The latter is very important for the broader automated driving industry, as this stimulates competition across the ADS space.
Daum offered an interesting projection on the business side too. He said that in 2030, the company expects to generate over 3 billion euros ($3.30 billion) in revenue and 1 billion euro earnings (before interest and taxes) from automated truck operations.
What do Daimler’s revenue numbers mean in the real world? To answer this question, I turned to my go-to trucking expert Lee White, who spent decades at UPS and after retirement worked for TuSimple as VP of Strategy until late last year. At UPS he was very involved in their evaluations of automated driving tech and setting a path forward for eventual adoption.
I asked Lee, who now runs LM White Consulting, about the Daimler numbers. He came back to me immediately (I think he did the math in his head): “Breaking down the $3B Euros into trucking math, it would mean more than 6,000 trucks driving 1,000 miles a day.” He assumed $2 per mile with the trucks running five days per week.
In North America, Daimler Trucks sells around 70,000 Class 8 trucks per year via their Freightliner brand. Using White’s number of 6000 trucks, this works out to 1500 per year.
White sees the information shared by Daimler as very encouraging to the AV trucking industry. “We have the leading OEM communicating definitive plans on when they will enter the market, as well as the expected revenue stream. It gives us some commercial handles we can hold on to for the future of AV trucking. That is a very bright north star for the industry!” He also noted that Daimler did not communicate the business model or details of how they will generate that $3B Euros. At some point I expect we’ll be hearing about this, but not anytime soon.
One of the strongest plays in automated trucking is Torc Robotics, 51% of which is owned by Daimler Trucks. I recently had an opportunity to experience the Torc automated driving system on freeways near their operations center in Albuquerque. From knowing the team for some time and having many discussions over the years, I expected their truck would run well on the highway. I was not disappointed. The vehicle operated as well or better than an alert professional truck driver, handling lane changes, merging traffic, and freeway-to-freeway interchanges. Their in-vehicle monitor showed vehicle data, relative lane position, and the presence of nearby vehicles, much like one sees in a robotaxi ride. Their “lane bias”technique was particularly effective, moving laterally a few feet to create more space between the host truck and another vehicle passing or merging. This is illustrated in the image below. As another truck was starting to pass the Torc truck on the left, the Torc system checked lane occupancy on the right. See that the right lane had no traffic, the vehicle moved towards the right-side of its lane, creating more space between the passing truck and the Torc truck. The image shows the latter stage of the passing maneuver, at about the time the Torc vehicle would move back to the center of the lane, i.e. the default position.
If you’ve ever driven a large vehicle (like the RV I used to have) on the highway while passing a truck, you’ll know that there can be squirrelly wind and air pressure between these two large bodies with significant air passing between them. Controlling the vehicle can be a bit challenging. This is why the Torc vehicle creates space when possible, as just one more aspect of reducing risk.
The Torc Robotics “lane bias” feature is one of their key on-road safety measures for automated ... [+]driving.
Torc Robotics
Torc also demonstrated their capability to run on surface streets near the freeway. More on this in an upcoming article.
Aurora Raises the Stakes, Death Vigil Postponed?
Aurora, another leader in the automated trucking space, provided their 2Q23 report last month. It was upbeat yet they backed up their assertions with data and metrics which I’ll discuss here, along with some industry context.
Aurora said they are now logging over 17,000 commercial miles per week. So far this year they have delivered 2,290 loads for customers including FedEx, Werner, Schneider, and Uber Freight, driving more than 630,000 commercial miles with nearly 100% on-time performance. The Aurora driverless-intent vehicles are being operated under the supervision of vehicle operators.
With all this activity and promising performance indicators, one might think all is rosy. But the doomsayers have long been holding a vigil for Aurora’s demise, due to the company burning through cash at a fast pace. “Aurora can’t survive at this spending pace and will run out of money long before they generate any substantial revenue!,” has been the typical refrain. My latest articles have pushed back on this view, simply because I couldn't imagine Aurora’s Board and leadership blithely trundling along until the bank account hits zero. My position has been: “I don't know what the plan is, but they’ve got to have a plan. It’s just not public.”
Aurora’s burn rate has been going in the right direction. The company said it narrowed its net loss for the 2Q23 to $218 million compared with a loss of $1.1 billion in the same quarter last year. “Still much too high,” my colleagues would say.
But then, in July, Aurora unveiled their plan, raising $853 million of total gross proceeds through a public offering, which included “very strong support from key institutional and strategic investors” according to their press release. Aurora said their beefed-up total liquidity is now $1.6B “which will fund us through our planned Commercial Launch and into the second half of 2025.”
Problem solved? Not yet. Given their cost structure, revenues by mid-2025 may be substantial but far short of profitability. Once they have a mature self-driving product, they may reduce staff to lower the cash burn. Still, another funding infusion will almost certainly be needed in a couple of years.
But there’s more! In researching this article, I discovered a June 30 SEC 8-K filing on June 30 in which the company estimated they will need “$1.6 billion to $1.7 billion in incremental capital beyond its cash, cash equivalents and short term investments as of June 30, 2023 to become free cash flow positive on a run-rate basis by the end of 2027.” The $885M raised in July was roughly half this amount. Therefore, they “have a plan” to again raise funds at some point in the future. The ability for Aurora or any other company to raise funds in a few years cannot be known now. This gives the death-vigil-keepers an excuse to proclaim eventual doom if they want, but I suggest they shift their dooming to other more-likely players.
In Lee White’s view, “the biggest issue for Aurora will be the speed and path to commercialization in order to get to cash flow positive and offset the over $200M per quarter spend. This won't be done by 20 trucks running between Dallas and Houston or any sort of in-house truck build out. Aurora will need more than 1500 trucks running with the Aurora Driver 1,000 miles per day to create revenue offsetting the ongoing staffing and R&D expenses at Aurora: that is the challenge in the next 24 months.” He also pointed to a key item not in Aurora’s control: sourcing 1500 driverless-ready tractors from their manufacturing partners, which require specialized levels of redundancy and other fail-safe features.
Aurora has strong partnerships with PACCAR and Volvo, but the pace at which these companies will produce driverless-ready trucks, and the number that will be allocated to Aurora, has not been discussed publicly. Aurora’s 2Q23 report discussed latest progress in this regard. During 2Q23 PACCAR completed a 1.5 million equivalent mile durability test of a Kenworth cab with the Aurora Driver hardware installed. The company said the Aurora Driver hardware remained fully functional at the end of the test. Regarding Volvo Autonomous Solutions, Aurora expects to begin testing a prototype Volvo VNL, powered by the Aurora Driver, in the first quarter of 2024.
The Aurora Scoreboard
Meanwhile, Aurora has released a new software version (Aurora Horizon Beta 7.0) and continues to add substance to their go-to-market narrative. Their premise goes like this: “The Aurora Driver will be ready to launch when we have a closed Safety Case for our Dallas to Houston lane. Our Safety Case is a comprehensive, evidence-based approach to confirming our self-driving vehicles are acceptably safe to operate on public roads. It goes beyond just ensuring the vehicle drives well enough for a demo; rather, it demonstrates that our product, and our company, are holistically and sustainably safe.”
Key to this process is the Aurora’s Autonomy Readiness Measure (ARM), which they explain as “a weighted measure of completeness across all claims of our Safety Case for the launch lane, which reflects the percentage of work needed to move from Feature Complete to our next critical milestone — Aurora Driver Ready.” The ARM is the progress tracker, which reached 65% at the end of the second quarter, a 21-point increase from the previous quarter. Aurora continues to be very bullish, stating that “we are on track to achieve Aurora Driver Ready later this year” and “we continue to expect to be contracted for commercial launch by the end of the year.”
While the ARM is a safety metric, Aurora’s Autonomy Performance Indicator (API) measures the operational performance of the Aurora Horizon service in real-world conditions. “This metric allows us to track not just the state of our technology, but the maturity of our processes and procedures in operating our business,” Aurora says. In particular, any need for on-site support will severely erode profitability – imagine the cost of dozens of support vehicles on standby along the route, complete with dispatchers and highly trained technicians. So, the indicator penalizes the use of on-site support. The API also considers autonomous miles driven which require remote input from the Aurora Services Platform. Aurora says that for 2Q23, the API was 97%, a slight increase from the previous quarter.
Compared to performance in the first quarter, about half of the 2Q loads had an API of 100% and nearly 75% of the loads had an API greater than or equal to 99%.
What about the conditions that required support? Aurora emphasized that none of the support their vehicles received was required to keep the vehicles operating safely: “In over 100,000 commercially-representative miles driven on the launch lane in 2023, including over 65,000 commercial miles during the second quarter, we experienced zero safety-critical interventions.”
Again, I turned to Lee White for a trucking professional’s perspective. What does this 97% API level mean to a trucking fleet? He said, “When I receive these updates, I evaluate them on a path to deployment. Translating Aurora’s 97% Autonomy Performance Indicator of 97% to trucking reality, this means one out of every 33 loads would have needed some type of ‘assistance’ - and that could be as basic as the truck would have called the mission oversight center for help, or perhaps Aurora would have sent out a recovery driver if there was deeper intervention required. On-Road assistance issues for trucking fleets results in potential service failures for customers and a much higher cost for the maintenance issue. The trucking fleets have invested in their maintenance programs to 'de-risk’ on-road breakdowns as much as possible. For example, OEMs and fleets use preventive maintenance schedules and advanced telematics information to predict and schedule maintenance to be done in their shops, to minimize the chance of a breakdown on road. The Best-in-Class fleets are at one breakdown per 1,000 truck days on road – so they have achieved success in limiting the on-road exposure both in service and in cost.” Lee provides a very sobering reference point: how long will it take for Aurora, or any other driverless truck provider, to achieve this level of performance?
Is Waymo Via Out For Good?
While Daimler Trucks and Aurora have been revving their engines, Waymo is downshifting. Waymo has been more in the news regarding its robotaxi operations, but their long-running truck AV program, called Waymo Via, has resulted in a very strong pre-commercial offering. However, early this year Waymo greatly reduced operations of their driverless trucking commercialization activities. In July, they put the whole thing on ice, repurposing many of their trucking staff to robotaxi operations. As Waymo co-CEOs Tekedra Mawakana and Dmitri Dolgov noted in a blog post, “With our decision to focus on ride-hailing, we’ll push back the timeline on our commercial and operational efforts on trucking, as well as most of our technical development on that business unit.”
Interestingly, though, in July both Waymo and Daimler Trucks re-affirmed that their long-running partnership to develop a driverless-ready tractor will continue.
Did Waymo get it backwards? Back to Lee White, my truck business sensei, for more truck math: “I am disappointed with Waymo Via's announcement. I am very impressed with what Waymo and Cruise are doing with the passenger cars from a commercialization standpoint – they are adding cities and demonstrating that AV vehicles can be deployed safely at scale! My disappointment is on the economics; a 3% penetration into the trucking market is worth more than a 50% capture of the robotaxi total addressable market. That 3% penetration can be achieved in the lanes where Waymo Via and the other AV trucking companies are currently testing. From my viewpoint, Waymo Via has the potential to provide a faster ROI on trucking lanes than the passenger side. And Waymo Via already has the OEM secured with Daimler, which is critical to scaling. Let's hope this is a short pause, with Waymo’s return powered by the economics of AV trucking.”
Why would Waymo step away from such a straightforward revenue play on the trucking side?
I can (facetiously) imagine the Alphabet Board saying to Waymo, “You’ve had years of fun with your people-moving sandbox and your freight-moving sandbox. Times have changed and now you have to pick one. Which will it be?”
It’s important to remember that at a fundamental level, Waymo was founded to “change the world” in personal mobility. After an early exploratory phase examining highway driving and personal vehicles, they zeroed in on robotaxis, which have immense potential to revolutionize the way people travel. This has always been their Big Audacious Dream and now they’re of one mind with a single offering. For Waymo to step away from robotaxi and retain trucking could have sent shockwaves through their investor base and cause a revolt from the tech team.
Indeed, Waymo has a very strong offering in the driverless trucking space. If they hold on to this and decide to re-energize it in a couple of years, Waymo Via will be a major player (and could even come to market sooner than Daimler/Torc). For now, there are some strong synergies at play in their new approach. I have long said that robotaxis must be able to travel on freeways as well as surface streets to fully scale up and generate the revenues they need. The Waymo Via team has been living and breathing highways for years. I’m confident that the highway-level autonomy smarts from their former truck developers are quickly being applied to a “Gen2” Waymo robotaxi that can travel on all the types of roads that we are able do on a daily basis. Waymo has already noted in latest press they are now testing on highways in preparation for commercial launch beyond streets only.
Where Are We Now?
Along with their deployment partners, we’re now down to five major players developing commercial autonomous trucks operating long-haul on highways: Aurora, Kodiak, Plus, Torc, and Waabi.
Lee White and I are both upbeat about latest developments. In particular, he notes, “This is fantastic news that Aurora completed this latest fundraising in this challenging capital market. It is a very positive sign for the entire AV Trucking industry.” I would add that, though they would never admit it, Aurora in particular is probably very happy to see their major competitor, Waymo, exit the stage.
I can’t stress enough the critical role of the truck manufacturers. We are now in a world where four major truck OEMs are deep into the platform development for truck autonomy: Daimler, Iveco, PACCAR, and Volvo. Traton is the big question mark. They were a significant player when their partnership with TuSimple was underway from late 2020 to the end of last year. Traton dissolved the partnership due to safety mishaps and management turmoil at TuSimple. They developed significant autonomy expertise on the manufacturer side of this relationship, and I surmise they have been industriously looking for a new partner during 2023. If/when an announcement is made, it could be one of the most significant developments of the year.
In September 2014, the Mercedes Benz truck division enthralled bleachers-full of industry luminaries and journalists at a test track in Germany by introducing their Future Truck (FT). Automated Class 8 trucks were zooming around the loop at full speed, pulling trailers emblazoned with “FT 2025” (see picture below.) I highly recommend the accompanying video, which presented the concept of the driver turning over control to the autonomy and then swiveling around to do other work on his computer. While the press release did not state that the company had “set a date” for introduction, the 2025 timing was a subtle message of their ambitions. I suspect that the management and engineers of Mercedes had no clear idea at the time how they would proceed to achieve full commercial autonomy by 2025, but a vision was set. Mercedes, now Daimler Trucks, has been continuously working toward the now announced 2027 date ever since then, with forays into truck platooning, developing increasingly sophisticated Advanced Driver Assistance Systems, and eventually acquiring Torc Robotics and ramping up significant internal resources to now have a definitive launch date.
Mercedes "Future Truck" presented in 2014, indicating their aspirations for 2025.
Daimler AG Global Communications
Yeah, Daimler is late by two years. At least according to the brave announcement of yesteryear. While they may not be first to market, Daimler’s entry will be a definitive marker. If all happens as planned for the companies discussed here, the real-deal launch of driverless trucking – at scale – is not far away.
Disclosure: I am a strategic advisor and/or hold equity in companies named in this article, including Aurora, Daimler Trucks, and Plus.
Mon, 21 Aug 2023 04:17:00 -0500Richard Bishopentext/htmlhttps://www.forbes.com/sites/richardbishop1/2023/08/21/big-dawgs-in-automated-trucking-make-big-moves-towards-commercialization/Killexams : API LibraryNo result found, try new keyword!These Bloomberg API libraries cannot be used by Bloomberg Professional terminal users (which use the Desktop API). They are only compatible with the Bloomberg Server API and B-Pipe data feed products.Fri, 02 Sep 2022 00:51:00 -0500entext/htmlhttps://www.bloomberg.com/tosv2.html?vid=&uuid=42a29477-425d-11ee-9010-457a62754769&url=L3Byb2Zlc3Npb25hbC9zdXBwb3J0L2FwaS1saWJyYXJ5Lw==Killexams : Legal Still Has a Lot to Learn—And Navigate—In the Gen AI EraKillexams : Legal Still Has a Lot to Learn—And Navigate—In the Gen AI Era | Legaltech News
After months of enthusiasm surrounding generative artificial intelligence, misunderstandings about the technology linger among attorneys and judges alike.
August 15, 2023 at 06:32 PM
5 minute read
X
Thank you for sharing!
Your article was successfully shared with the contacts you provided.
It’s been almost a year since OpenAI released its generative artificial intelligence-powered chatbot ChatGPT, spurring new generative AI legal tech products, new legal questions and worries about the technology’s impact on the job market.
During the “Generative AI in Legal: Investigating the Rise of Intelligent Counsel” webinar on Tuesday hosted by EDRM, panelists explored the legal industry’s current relationship with generative AI and how some of its lingering misconceptions may shape future adoption levels, judge orders and trust in APIs.
Where Gen AI and Legal Meet
Want to continue reading? Become an ALM Digital Reader for Free!
Benefits of a Digital Membership
Free access to 1 article* every 30 days
Access to the entire ALM network of websites
Unlimited access to the ALM suite of newsletters
Build custom alerts on any search subject of your choosing
With this subscription you will receive unlimited access to high quality, online, on-demand premium content from well-respected faculty in the legal industry. This is perfect for attorneys licensed in multiple jurisdictions or for attorneys that have fulfilled their CLE requirement but need to access resourceful information for their practice areas. View Now
Our Team Account subscription service is for legal teams of four or more attorneys. Each attorney is granted unlimited access to high quality, on-demand premium content from well-respected faculty in the legal industry along with administrative access to easily manage CLE for the entire team. View Now
Gain access to some of the most knowledgeable and experienced attorneys with our 2 bundle options! Our Compliance bundles are curated by CLE Counselors and include current legal Topics and challenges within the industry. Our second option allows you to build your bundle and strategically select the content that pertains to your needs. Both options are priced the same. View Now
Law.com Compass includes access to our exclusive industry reports, combining the unmatched expertise of our analyst team with ALM’s deep bench of proprietary information to provide insights that can’t be found anywhere else.
Law.com Compass delivers you the full scope of information, from the rankings of the Am Law 200 and NLJ 500 to intricate details and comparisons of firms’ financials, staffing, clients, news and events.
Don't miss the crucial news and insights you need to make informed legal decisions. Join Legaltech News now!
Unlimited access to Legaltech News
Access to additional free ALM publications
1 free article* across the ALM subscription network every 30 days
Exclusive discounts on ALM events and publications
Tue, 15 Aug 2023 10:32:00 -0500entext/htmlhttps://www.law.com/legaltechnews/2023/08/15/legal-still-has-a-lot-to-learn-and-navigate-in-the-gen-ai-era/?slreturn=20230724050441Killexams : Black Hat 2023 Keynote: Navigating Generative AI in Today’s Cybersecurity LandscapeAzaria Labs CEO and founder Maria Markstedter speaks at Black Hat 2023 in Las Vegas on Aug. 10, 2023. Image: Karl Greenberg/TechRepublic
At Black Hat 2023, Maria Markstedter, CEO and founder of Azeria Labs, led a keynote on the future of generative AI, the skills needed from the security community in the coming years, and how malicious actors can break into AI-based applications today.
Jump to:
The generative AI age marks a new technological boom
Both Markstedter and Jeff Moss, hacker and founder of Black Hat, approached the subject with cautious optimism rooted in the technological upheavals of the past. Moss noted that generative AI is essentially performing sophisticated prediction.
“It’s forcing us for economic reasons to take all of our problems and turn them into prediction problems,” Moss said. “The more you can turn your IT problems into prediction problems, the sooner you’ll get a benefit from AI, right? So start thinking of everything you do as a prediction issue.”
He also briefly touched on intellectual property concerns, in which artists or photographers may be able to sue companies that scrape training data from original work. Authentic information might become a commodity, Moss said. He imagines a future in which each person holds ” … our own boutique set of authentic, or should I say uncorrupted, data … ” that the individual can control and possibly sell, which has value because it’s authentic and AI-free.
Unlike in the time of the software boom when the internet first became public, Moss said, regulators are now moving quickly to make structured rules for AI.
“We’ve never really seen governments get ahead of things,” he said. “And so this means, unlike the previous era, we have a chance to participate in the rule-making.”
Many of today’s government regulation efforts around AI are in early stages, such as the blueprint for the U.S. AI Bill of Rights from the Office of Science and Technology.
The massive organizations behind the generative AI arms race, especially Microsoft, are moving so fast that the security community is hurrying to keep up, said Markstedter. She compared the generative AI boom to the early days of the iPhone, when security wasn’t built-in, and the jailbreaking community kept Apple busy gradually coming up with more ways to stop hackers.
“This sparked a wave of security,” Markstedter said, and businesses started seeing the value of security improvements. The same is happening now with generative AI, not necessarily because all of the technology is new, but because the number of use cases has massively expanded since the rise of ChatGPT.
“What they [businesses] really want is autonomous agents giving them access to a super-smart workforce that can work all hours of the day without running a salary,” Markstedter said. “So our job is to understand the technology that is changing our systems and, as a result, our threats,” she said.
New technology comes with new security vulnerabilities
The first sign of a cat-and-mouse game being played between public use and security was when companies banned employees from using ChatGPT, Markstedter said. Organizations wanted to be sure employees using the AI chatbot didn’t leak sensitive data to an external provider, or have their proprietary information fed into the black box of ChatGPT’s training data.
SEE: Some variants of ChatGPT are showing up on the Dark Web. (TechRepublic)
“We could stop here and say, you know, ‘AI is not gonna take off and become an integral part of our businesses, they’re clearly rejecting it,'” Markstedter said.
Except businesses and enterprise software vendors didn’t reject it. So, the newly developed market for machine learning as a service on platforms such as Azure OpenAI needs to balance rapid development and conventional security practices.
Many new vulnerabilities come from the fact that generative AI capabilities can be multimodal, meaning they can interpret data from multiple types or modalities of content. One generative AI might be able to analyze text, video and audio content at the same time, for example. This presents a problem from a security perspective because the more autonomous a system becomes, the more risks it can take.
SEE: Learn more about multimodal models and the problems with generative AI scraping copyrighted material (TechRepublic).
For example, Adept is working on a model called ACT-1 that can access web browsers and any software tool or API on a computer with the goal, as listed on their website, of ” … a system that can do anything a human can do in front of a computer.”
An AI agent such as ACT-1 requires security for internal and external data. The AI agent might read incident data as well. For example, an AI agent could get malicious code in the course of trying to solve a security problem.
That reminds Markstedter of the work hackers have been doing for the last 10 years to secure third-party access points or software-as-a-service applications that connect to personal data and apps.
“We also need to rethink our ideas around data security because model data is data at the end of the day, and you need to protect it just as much as your sensitive data,” Markstedter said.
Markstedter pointed out a July 2023 paper, “(Ab)using Images and Sounds for Indirect Instruction Injection in Multi-Modal LLMs,” in which researchers determined they could trick a model into interpreting a picture of an audio file that looks harmless to human eyes and ears, but injects malicious instructions into code an AI might then access.
Malicious images like this could be sent by email or embedded on websites.
“So now that we have spent many years teaching users not to click on things and attachments in phishing emails, we now have to worry about the AI agent being exploited by automatically processing malicious email attachments,” Markstedter said. “Data infiltration will become rather trivial with these autonomous agents because they have access to all of our data and apps.”
One possible solution is model alignment, in which an AI is instructed to avoid actions that might not be aligned with its intended objectives. Some attacks target modal alignment specifically, instructing large language models to circumvent their model alignment.
“You can think of these agents like another person who believes anything they read on the internet and, even worse, does anything the internet tells it to do,” Markstedter said.
Will AI replace security professionals?
Along with new threats to private data, generative AI has also spurred worries about where humans fit into the workforce. Markstedter said that while she can’t predict the future, generative AI has so far created a lot of new challenges the security industry needs to be present to solve.
“AI will significantly increase our market cap because our industry actually grew with every significant technological change and will continue growing,” she said. “And we developed good enough security solutions for most of our previous security problems caused by these technological changes. But with this one, we are presented with new problems or challenges for which we just don’t have any solutions. There is a lot of money in creating those solutions.”
Demand for security researchers who know how to handle generative AI models will increase, she said. That could be good or bad for the security community in general.
“An AI might not replace you, but security professionals with AI skills can,” Markstedter said.
She noted that security professionals should keep an eye on developments in the area of “explainable AI,” which helps developers and researchers look into the black box of a generative AI’s training data. Security professionals might be needed to create reverse engineering tools to discover how the models make their determinations.
What’s next for generative AI from a security perspective?
Generative AI is likely to become more powerful, said both Markstedter and Moss.
“We need to take the possibility of autonomous AI agents becoming a reality within our enterprises seriously,” said Markstedter. “And we need to rethink our concepts of identity and asset management of truly autonomous systems having access to our data and our apps, which also means that we need to rethink our concepts around data security. So we either show that integrating autonomous, all-access agents is way too risky, or we accept that they become a reality and develop solutions to make them safe to use.”
She also predicts that on-device AI applications on mobile phones will proliferate.
“So you’re going to hear a lot about the problems of AI,” Moss said. “But I also want you to think about the opportunities of AI. Business opportunities. Opportunities for us as professionals to get involved and help steer the future.”
Disclaimer: TechRepublic writer Karl Greenberg is attending Black Hat 2023 and recorded this keynote; this article is based on a transcript of his recording. Barracuda Networks paid for his airfare and accommodations for Black Hat 2023.
Thu, 10 Aug 2023 15:31:00 -0500en-UStext/htmlhttps://www.techrepublic.com/article/black-hat-2023-keynote/Killexams : Fine-Tuning, Prompt Engineering are Keys to Delivering Real Generative AI Solutions to Commercial Pharma Operations Today
In March, OpenAI announced the release of the Fine-Tuning API (Application Programming Interface), specifically designed to facilitate the fine-tuning process for language models. The new availability of the Open AI Fine-Tuning API makes using generative AI in commercial operations an imminent reality versus a theoretical prospect.
The Fine-Tuning API enables developers to adapt pretrained models to specific tasks or domains, providing more flexibility and customization.
In addition to OpenAI, various other platforms and frameworks, such as Hugging Face, Google’s TensorFlow, and Microsoft’s Azure Machine Learning, have incorporated fine-tuning as well as prompt engineering capabilities into their Natural Language Processing (NLP) offerings. By harnessing the capabilities of these techniques, life sciences organizations can gain a competitive edge, accelerate decision-making, and enhance customer engagement.
Fine-tuning enhances content development
Fine-tuning takes a pre-trained language model, such as GPT-3.5, and adapts it to perform specific tasks or understand domain-specific information.
For pharmaceutical commercial operations, fine-tuning can be used to create hyper-personalized content to support brand managers, customer experience professionals, sales teams, and medical science liaisons, and alleviate some of the heavy lifting it has traditionally taken to create, organize, and analyze customized content.
In addition, fine-tuned models can scour the web and social media platforms to gather information about competitors, industry trends, and market dynamics. By monitoring relevant conversations, pharmaceutical companies can stay current with the latest developments, identify new opportunities, and make informed strategic decisions.
For example, with fine-tuning, generative AI can nearly instantaneously create initial drafts of copy for a variety of customer channels. While this content will still require editing and refinement by marketing teams, as well as the required medical/legal/regulatory review, AI can help to expedite content development and empower marketing teams to become more efficient and focused.
Another capability that is now possible through fine-tuning is refined content tagging. Content tagging has always been a tremendously laborious task. Auto-tagging helps the right content get delivered to the appropriate customers. Generative AI has already proven its ability to automate content tagging, which saves creators tremendous time and effort, as well as reduces the possibility of mistakes.
Here’s an example: Let’s take approved email templates and key messages as our specific marketing content. Fine-tuning models can target specific domains and generate prompts to tag the content into topics. These content Topics can then be used to generate message “sets” that contain information relevant for a specific channel and topic. These message sets can then be used to support marketing campaigns, generating specific content recommendations based on a topic.
Prompt engineering delivers deeper HCP insights
Prompt engineering can also play an important role in pharmaceutical commercialization by enabling companies to leverage AI models for various operations. By carefully designing prompts, companies can utilize AI models to transcribe and summarize interactions between representatives and healthcare professionals, providing valuable insights and analytics. This helps in managing customer relationships by capturing preferences, interests, concerns, and action items more efficiently.
Additionally, prompt engineering serves as a companion to fine-tuning, optimizing model performance through the thoughtful formulation of prompts. It guides the fine-tuning process, enhancing the model’s output and aligning it with specific objectives.
With prompt engineering, pharmaceutical sales teams can benefit from personalized recommendations and richer insights. For instance, by using prompts like “Identify key opinion leaders in the field of lung cancer,” AI models extract relevant information to support targeted engagement strategies. This leads to more effective marketing campaigns and higher sales conversion rates while also improving the customer experience for doctors. As a result, too, medical science liaisons and individual sales representatives can develop closer, long-term relationships with top physicians.
Generative AI further enhances the value of prompt engineering by synthesizing important insights from various interactions, including text, images, compliant transcripts, and call notes. By automating this process, generative AI quickly transforms on-the-ground insights into robust analytics. This improves Next Best Action (NBA) recommendations from intelligence engines, leading to stronger and more meaningful relationships with healthcare professionals.
Here’s an example: A prompt like “summarize key discussion points and action items from the meeting with Dr. Jones” can guide the AI model to extract relevant information and generate a concise summary of the meeting. The summary can highlight important Topics discussed, any action items agreed upon, and specific areas of interest expressed by Dr. Jones.
With prompt engineering, commercial and medical teams get accurate and organized information about their interactions with HCPs in near real-time. This empowers them to follow up with personalized and targeted communication, providing relevant information and resources that address the specific needs and interests of each HCP.
In addition, prompt engineering can also support the analysis of a large volume of interactions, allowing AI models to identify patterns and trends across multiple HCP engagements. By formulating prompts that capture specific metrics or parameters, such as “analyze prescribing patterns among HCPs in a specific region,” the AI model can extract data and generate insights that help identify opportunities for improved engagement and tailored support.
A new era of generative AI is here
Fine-tuning and prompt engineering have ushered in a new era of generative AI advantages for commercial operations at pharmaceutical companies. Supporting APIs and application frameworks make it possible to fully leverage these novel techniques and provide an immediate and concrete way for teams to benefit from generative AI including optimizing sales and marketing efforts, developing better relationships with healthcare professionals, and capturing a deeper understanding of customers.
As the field of AI continues to advance, further advancements in fine-tuning and prompt engineering are anticipated. They also offer a glimpse into the long-term transformative capabilities of generative AI. With continued exploration and integration of these techniques, commercial pharma operations can unlock even greater value, accelerate innovation, and navigate the evolving landscape with confidence.
Photo: Stas_V, Getty Images
Tue, 15 Aug 2023 01:00:00 -0500en-UStext/htmlhttps://medcitynews.com/2023/08/fine-tuning-prompt-engineering-are-keys-to-delivering-real-generative-ai-solutions-to-commercial-pharma-operations-today/Killexams : Nvidia upgrades Omniverse for OpenUSD framework and generative AI
We're thrilled to announce the return of GamesBeat Summit Next, hosted in San Francisco this October, where we will explore the theme of "Playing the Edge." Apply to speak here and learn more about sponsorship opportunities here.
Nvidia said the Omniverse platform will leverage the OpenUSD framework and generative AI (AI) to accelerate the creation of virtual worlds and advanced workflows for industrial digitalization.
Nvidia rolled out a bunch of announcements related to its Omniverse platform, which is unifying platform for the industrial metaverse, at the Siggraph computer graphics event in Los Angeles.
Nvidia CEO Jensen Huang made a bunch of announcements about RTX workstation graphics chips, generative AI tools, and Nvidia’s contributions to the Open Universal Scene Description (OpenUSD) 3D file format for open and extensible 3D worldbuilding.
Rev Lebaredian, vice president of Omniverse and Simulation Technology at Nvidia, said in an interview with VentureBeat that said that generative AI is going to provide a big boost to Omniverse.
Event
VB Transform 2023 On-Demand
Did you miss a session from VB Transform 2023? Register to access the on-demand library for all of our featured sessions.
“Nvidia AI and Omniverse are two are distinct platforms,” Lebaredian said. “But they’re linked. They support each other. We can’t predict exactly when AI is going to be smart enough to do some of the things we want. There’s no way to know. It’s all research. We’re pleasantly surprised that the large language models and things that they’re doing with ChatGPT show the world happened a little bit earlier than most people expected. And so now we’re harnessing that for for Omniverse.”
The marriage of AI and Omniverse
Nvidia CEO Jensen Huang shows off OpenUSD.
Lebaredian said that with AI today, the big change is that large language models are encapsulating a lot of knowledge and it seems like AI is understanding what humans are searching for.
“This changes everything for USD and what we’re doing with 3D,” he said. “One of the fundamental changes is the ability to discern natural language and a human’s intent. Not a lot of people in the world have a deep understanding of how a computer works and how computer languages work. Not a lot of people in the world can write software programs. But what we’re seeing with ChatGPT, and in all these other models, is that they’re actually quite good at writing software, which democratizes the ability to program. It feels like it’s not going to be long until virtually everyone who has access to a computer is essentially going to be a programmer. You just tell it what to do.”
“With the creation of 3D virtual worlds, this is invaluable,” Lebaredian said. Being able to write programs that that assist you in generating the worlds is awesome. So what we announced there is where we’re working on training a model that specialized with USD with code that calls into the USD API’s. We call those code snippets. So that’s so that anyone could potentially become a developer for USD.”
He said you can use prompts to do things like find all objects in the scene. You can tell it to find all objects in the scene that are larger than a certain amount or have metallic material. Normally, you’d have to write a Python script or some C++ to do something like that.”
Omniverse upgrades
Notable updates include Omniverse USD Composer.
Omniverse, Nvidia’s platform for building and connecting 3D tools, received a major upgrade. New connectors and advancements showcased in Omniverse foundation applications enhance the platform’s efficiency and user experience. Notable updates include Omniverse USD Composer, which allows users to assemble large-scale, OpenUSD-based scenes, and Omniverse Audio2Face, which provides generative AI APIs for realistic facial animations and gestures.
Popular applications such as Cesium, Convai, Move AI, SideFX Houdini, and Wonder Dynamics are now seamlessly connected to Omniverse via OpenUSD, expanding the platform’s capabilities.
Rev Lebaredian, vice president of Omniverse and Simulation Technology at Nvidia, said in an interview with GamesBeat that there’s growing demand for connected and interoperable 3D software ecosystems among industrial enterprises.
He emphasized that the latest Omniverse update empowers developers to harness generative AI through OpenUSD, enhancing their tools. Moreover, the update enables enterprises to build larger, more complex world-scale simulations, serving as digital testing grounds for industrial applications.
Key improvements to the Omniverse platform include enhancements to Omniverse Kit, which serves as the engine for developing native OpenUSD applications and extensions. Moreover, Nvidia introduced the Omniverse Kit Extension Registry, a central repository for accessing, sharing, and managing Omniverse extensions.
This registry empowers developers to easily customize their applications by enabling them to turn functionalities on and off. Additionally, extended-reality developer tools were introduced, enabling users to incorporate spatial computing options into their Omniverse-based applications.
With over 600 core Omniverse extensions provided by Nvidia developers can now build custom applications with greater ease, enabling modular app building.
The Omniverse update also brings new developer templates and resources that provide a headstart for developers starting with OpenUSD and Omniverse, requiring minimal coding to get started. Rendering optimizations have been implemented to fully leverage the capabilities of Nvidia’s Ada Lovelace architecture enhancements in Nvidia RTX GPUs. The integration of DLSS 3 technology into the Omniverse RTX Renderer and the addition of an AI denoiser enables real-time 4K path tracing of massive industrial scenes.
Another key highlight of the update is the native RTX-powered spatial integration, which allows users to build spatial-computing options directly into their Omniverse-based applications. This integration provides users with flexibility in experiencing their 3D projects and virtual worlds.
The Omniverse platform update includes upgrades to various foundation applications, which serve as customizable reference applications for creators, enterprises, and developers. One such application is Omniverse USD Composer, which enables users to assemble large-scale OpenUSD-based scenes. Omniverse Audio2Face, another upgraded application, offers access to generative AI APIs for creating realistic facial animations and gestures from audio files, now including multilingual support and a new female base model.
New Omniverse partners
Nvidia’s Omniverse partners include major car makers.
Several customers have already embraced Omniverse for various digitalization tasks. Boston Dynamics AI Institute is utilizing Omniverse to simulate robots and their interactions, facilitating the design of novel robotics and control systems.
Continental, a leading company in automotive and autonomous systems, leverages Omniverse to generate physically accurate synthetic data at scale for training computer-vision AI models and performing system-integration testing in its mobile robots business.
Volvo Cars has transitioned its digital twin to be OpenUSD-based, providing immersive visualizations to aid customers in making online purchasing decisions. Marks Design, a brand design and experience agency, has adopted Omniverse and OpenUSD to streamline collaboration and enhance animation, visualization, and rendering workflows.
The latest release of the Omniverse platform is currently available in beta for free get and testing. Nvidia plans to launch the commercial version of Omniverse in the coming months, offering subscription plans and enterprise support to meet the needs of businesses and organizations.
With the major update to Omniverse, Nvidia aims to empower developers, creators, and industrial enterprises with advanced 3D pipelines and generative AI capabilities.
The platform’s integration with popular applications, improved developer resources, and expanded ecosystem partnerships are set to drive innovation in the fields of industrial digitalization, robotics, autonomous systems, computer graphics, and more
More announcements
Nvidia Picasso is a foundry for digital art using generative AI.
In a major leap towards unlocking the next era of 3D graphics, design, and simulation, Nvidia formally announced its participation in the Alliance for OpenUSD, joining forces with industry giants Pixar, Adobe, Apple, and Autodesk. This collaboration ensures compatibility in 3D tools and content across industries, paving the way for digitalization. (Intel and Advanced Micro Devices have yet to join this effort).
Nvidia also launched three new desktop workstation Ada Generation GPUs: the NVIDIA RTX 5000, RTX 4500, and RTX 4000. And Shutterstock, a creative platform for image creators, unveiled its utilization of generative AI in 3D scene backgrounds.
Leveraging Nvidia Picasso, a cloud-based foundry for building visual generative AI models, Shutterstock trained a foundation model that can generate photorealistic, 8K, 360-degree high-dynamic-range imaging (HDRi) environment maps.
This development enables quicker scene development for artists and designers. Additionally, Autodesk announced its integration of generative AI content-creation services, developed using foundation models in Picasso, with its popular software Autodesk Maya.
Nvidia Studio Driver releases, which provide optimal performance and reliability for artists, creators, and 3D developers, also received an update. The August Nvidia Studio Driver includes support for updates to Omniverse, XSplit Broadcaster, and Reallusion iClone, ensuring peak reliability for users’ favorite creative applications.
Adobe and Nvidia strengthened their collaboration across Adobe Substance 3D, generative AI, and OpenUSD initiatives. They announced plans to make Adobe Firefly, Adobe’s family of creative generative AI models, available as APIs in Omniverse, thereby enhancing the design processes of developers and creators.
The new Nvidia RTX 5000, RTX 4500, and RTX 4000 Ada Generation professional desktop GPUs, featuring Nvidia’s Ada Lovelace architecture, deliver improved rendering, real-time interactivity, and AI performance in 3D applications.
These GPUs incorporate third-generation RT Cores for enhanced image processing and fourth-generation Tensor Cores for AI training and development. With large GPU memory and advanced video encoding capabilities, the GPUs are tailored to meet the demands of high-end creative workflows.
During the event, Nvidia artist Andrew Averkin showcased his work, “Natural Coffee,” which exemplified the fusion of art and technology. Averkin utilized AI to generate visual ideas and employed Nvidia’s GPUs and Omniverse USD Composer for efficient 3D modeling, lighting, and scene composition. The integration of these tools significantly reduced the time required for creating immersive 3D scenes.
GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings.
Tue, 08 Aug 2023 11:00:00 -0500Dean Takahashien-UStext/htmlhttps://venturebeat.com/ai/nvidia-upgrades-omniverse-to-for-openusd-framework-and-generative-ai/Killexams : Matteo Papadopoulos, DatoCMS: “agencies and developers need to step outside their comfort zones”
This website is using a security service to protect itself from online attacks. The action you just performed triggered the security solution. There are several actions that could trigger this block including submitting a certain word or phrase, a SQL command or malformed data.
Wed, 23 Aug 2023 19:33:00 -0500en-UStext/htmlhttps://cybernews.com/security/matteo-papadopoulos-datocms-agencies-and-developers-need-to-step-outside-their-comfort-zones/Killexams : Critics question how climate-friendly an Appalachian ‘blue’ hydrogen hub will be
Critics say a pair of proposals to make Appalachian Ohio part of regional hydrogen hubs is likely to benefit the state’s oil and gas industry more than the climate.
The two proposals are among 21 projects competing for shares of a $7 billion pot of grant money under the 2021 Bipartisan Infrastructure Law. The law defines hydrogen hubs as networks of clean hydrogen producers, their potential consumers and infrastructure connecting them. At least one of the winning projects is to be a “blue” hydrogen hub, meaning it would make hydrogen from fossil fuels with carbon capture, storage and possible reuse, or CCUS.
The Appalachian Regional Clean Hydrogen Hub plans to collect methane from a web of natural gas pipelines in Ohio, West Virginia, Pennsylvania and Kentucky for a hydrogen production facility in West Virginia. The ARCH2 coalition includes Battelle, natural gas industry companies, the state of West Virginia, and more.
The Decarbonization Network of Appalachia, or DNA H2Hub, has the economic development group Team Pennsylvania as its project lead and is also proposing a blue hydrogen hub for Pennsylvania, West Virginia and Ohio. Equinor and Shell are among the group’s corporate partners.
Because both hubs would use methane from the region as feedstocks, they represent potentially large customers for the natural gas industry.
“We believe there are opportunities for the industry in a regional hub or hydrogen ecosystem and that Appalachia is more suited than most areas because of our compactness, access to natural gas and manufacturing infrastructure,” said Rob Brundrett, president of the Ohio Oil & Gas Association. “There certainly would be a benefit, especially the role natural gas plays in the creation of blue hydrogen, but we think it is too early to tell exactly what and how much benefit it may be to the industry.”
Much will depend on how hydrogen from the hubs will be used, whether it will displace other current uses of methane, and overall costs and market prices for natural gas. Rough estimates from the Ohio Oil & Gas Association are that latest production has gone in equal shares to power generation, heat and chemicals.
On the high end, blue hydrogen hubs might increase natural gas consumption and industry revenues. On the low end, sales to hydrogen hubs could offset potential losses if other uses decrease as a result of the energy transition.
Hydrogen production with natural gas and capture of carbon emissions from burning natural gas have gone on for decades, said policy advisor Rachel Fox at the American Petroleum Institute. Current U.S. hydrogen production is approximately 10 million metric tons per year, she said.
“The new challenge and opportunity is to scale these two complementary technologies together,” Fox continued. “API and our members are excited about the H2Hubs program and the impact it could have on the growth of a low-carbon hydrogen economy.” She said the industry has shown 65% to 90% carbon capture rates are commercially achievable.
‘A risky gamble’
As a decarbonization strategy, a blue hydrogen hub would be “a really energy-intensive, really water-intensive thing that commits that sector to being fossil-based forever, essentially,” said Emily Grubert, an energy policy expert at the University of Notre Dame.
It’s unclear whether blue hydrogen “would even result in a net reduction of carbon emissions,” said Ben Hunkler, communications manager for the Ohio River Valley Institute. In a 2022 analysis, he said a blue hydrogen hub would be “a risky gamble,” whose costs likely outweigh environmental benefits when compared with other options, such as renewable energy.
Although industry and government “now talk about carbon capture as having been proven, it really hasn’t,” said David Schlissel, director of resource planning and analysis for the Institute for Energy Economics and Financial Analysis. There hasn’t been any long-term, large-scale demonstration of its effectiveness over the time frame when promoters expect blue hydrogen hubs to operate.
Hydrogen can also leak, especially because its molecules are so small. “We think it leaks everywhere, but there’s no commercially available technology that can measure hydrogen leakage,” Schlissel said. Leaked hydrogen could prolong methane’s impacts in the atmosphere, researchers reported in Nature Communications last December.
Notably, both the Ohio Oil & Gas Association and the American Petroleum Institute have commented against the U.S. Environmental Protection Agency’s proposed rules that would effectively require carbon capture and storage for fossil fuel-fired power plants.
The ability to outfit power plants with carbon capture equipment isn’t advanced enough to be feasible yet, Brundrett said. “Therefore, at this time we would not encourage any mandates regarding a technology that isn’t available to the scale required by the rules.”
It’s unclear how the CCUS technology for a power plant would differ from that for a hydrogen production facility. Brundrett said the technology “has a promising future, and we will remain engaged in the hydrogen hub process with the hope that Appalachia is able to utilize our natural advantages if awarded by the federal government.”
A ‘moon shot’
For now, chances seem good that at least one of the projects will get funding. The Bipartisan Infrastructure Act requires at least two regional clean hydrogen hubs to be in places with “the greatest natural gas resources.” Separate provisions let the Appalachian Regional Commission provide grants and technical assistance for a regional hydrogen hub.
The federal funding is meant to act like a “moon shot,” to quickly ramp up clean hydrogen production.
“The reality is that we believe that there’s a near-term climate need that we need to be addressing, [and] that we need to think about how quick can we bring one of these technologies or a lot of these technologies to the marketplace,” said Thomas Murphy, senior managing director for strategic energy initiatives at Team Pennsylvania, during a webinar presented this summer by Appalachian Energy Future, an industry-led alliance promoting hydrogen hubs.
The DOE initiative aims to “[drive] down the cost of getting new technologies into the market,” said Grant Goodrich, who heads the Great Lakes Energy Institute at Case Western Reserve University. “You’re increasing market readiness and market demand.”
And while scaled commercial carbon capture and storage technologies don’t yet exist and can’t operate without government support, the Department of Energy’s hydrogen hub initiative could jumpstart a hydrogen economy for hard-to-electrify uses, such as high-heat industrial processes, heavy-duty transportation, or aviation, Goodrich said. That in turn might lead to effective carbon capture for other hard-to-decarbonize industries that produce greenhouse gases, such as the cement industry.
The DOE guidelines also call for projects to track how clean their processes turn out to be, Goodrich said. That should provide some accountability.
DOE’s decisions on the grant applications could come before the end of the year. DOE will also spend $1 billion to develop demand for hydrogen from the hubs, the agency announced in July.
Mon, 21 Aug 2023 04:03:00 -0500en-UStext/htmlhttps://energynews.us/2023/08/21/critics-question-how-climate-friendly-an-appalachian-blue-hydrogen-hub-will-be/Killexams : Multi-Asset Risk System (MARS) APINo result found, try new keyword!Built on top of Bloomberg’s Server API (SAPI) and B-PIPE platform, MARS API provides over-the-counter derivative creation, pricing, Greeks calculation, stress testing, scenario analysis and ...Wed, 07 Sep 2022 10:42:00 -0500entext/htmlhttps://www.bloomberg.com/tosv2.html?vid=&uuid=42ff36bd-425d-11ee-936f-57664a416c6c&url=L3Byb2Zlc3Npb25hbC9wcm9kdWN0L211bHRpLWFzc2V0LXJpc2stc3lzdGVtLW1hcnMtYXBpLw==API-580 exam dump and training guide direct download Training Exams List