Marissa Robert graduated from Brigham Young University with a degree in English language and literature. She has extensive experience writing marketing campaigns and business handbooks and manuals, as well as doing freelance writing, proofreading and editing. While living in France she translated manuscripts into English. She has published articles on various websites and also periodically maintains two blogs.
Because of climate change, we are experiencing more natural disasters than at any other time in my lifetime. Yet we still seem to act as if each disaster is a unique and surprising event rather than recognizing the trend and creating adequate ways to mitigate or prevent disasters like the one we just saw in Hawaii.
From how we approach a disaster to the tools we could use but are not using to prevent or reduce the impact, we could better assure ourselves that the massive damage incurred won’t happen again. Still, we continually fail to apply what we know to the problem.
How can we strengthen our approach to dealing with disasters like the recent Maui fire? Let’s explore some potential solutions this week. Then we’ll close with my Product of the Week, a new all-in-one desktop PC from HP that could be perfect for anyone who wants an easy-to-set-up-and-use desktop computing solution.
Disaster response and recovery should follow a process where you first rescue and save the living and then analyze what happened. From that analysis, you develop and implement a plan to make sure it never happens again. As part of that last phase, you remove people from jobs they have proven unable to do, but not necessarily everyone who simply held a key position when the disaster happened.
Instead, we tend to jump to blame almost immediately, which makes the analysis of the cause of a disaster very difficult because people don’t like to be blamed for things, especially when they couldn’t have done anything differently.
Generative AI could help a great deal by driving a process that focuses on the aspects of mitigating the problem that would have the most significant impact on saving lives both initially and long-term rather than focusing on holding people accountable.
Beyond the restrictions this places on analyzing the problem, focusing on blame often stops the process once people are indicted or fired, as if the job is done. But we still must address the endemic causes of the issue. Someone who has been through a disaster before is probably better able to prioritize action should the problem arise again, so firing the person in charge who has that experience could be counterproductive.
Generative AI, acting as a dynamic policy — one that could morph to best address a wide range of disaster variants — could provide direction as to where to focus first, help analyze the findings, and, if properly trained, recommend both an unbiased path of action and a process to ensure the same thing doesn’t happen again.
One of the problems with disasters is that those working to mitigate them tend to be under-resourced. When disaster mitigation teams devise a plan, they often face rejection due to the government’s unwillingness to pay for the implementation costs.
Had the power company in Hawaii been told that if they didn’t bury the power lines or at least power them down, they’d go out of business, one of those two things would have happened. But they didn’t because they didn’t do risk/reward analysis well.
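The risk/reward analysis the utility skipped can be sketched as a simple expected-loss comparison. The figures below are hypothetical placeholders for illustration, not actual Hawaiian utility numbers:

```python
# Illustrative risk/reward sketch: compare the cost of hardening the grid
# against the expected loss from doing nothing. All figures are hypothetical
# placeholders, not actual utility numbers.

def expected_annual_loss(event_probability: float, loss_if_event: float) -> float:
    """Expected loss per year = probability of the event times the damage."""
    return event_probability * loss_if_event

# Hypothetical inputs: a 2% yearly chance of a catastrophic wind-driven fire
# causing $5B in damage, versus a one-time $1B cost to bury the lines,
# amortized over 30 years.
do_nothing = expected_annual_loss(0.02, 5_000_000_000)  # $100M/year
bury_lines = 1_000_000_000 / 30                         # ~$33M/year

print(f"Expected loss, no mitigation: ${do_nothing:,.0f}/year")
print(f"Amortized cost of burying:    ${bury_lines:,.0f}/year")
```

Even with deliberately rough inputs, this kind of comparison makes the case for mitigation visible in a way a utility board can't easily wave away.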
All of this is easy for me to say in hindsight. Still, with tools like Nvidia’s Omniverse, you can create highly accurate, predictive simulations that show, as if you were in the event, what would happen in a disaster depending on whether a given measure was taken.
Is Hawaii likely to have a high-wind event? Yes, because it’s in a hurricane path and has a history of high wind events. So, it would make sense to run simulations on wind, water, and tsunami events to determine likely ways to prevent extreme damage.
The answer could be something as simple as powering down the grid during a wind event or moving the electrical wiring underground if powering down the grid was too disruptive.
In addition, you can model evacuation routes. We know that if too many people are on the road at once, you get gridlock, making it difficult for anyone to escape. You must phase the evacuation to get the most people out of an area and prioritize getting out those closest to the event’s epicenter first.
But as is often the case, those farthest from the event have the least traffic, and those closest are likely unable to escape, which is clearly a broken process.
Through simulation and AI-driven communications, you should be able to phase an evacuation more effectively and ensure the maximum number of people are made safe.
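The intuition behind phasing can be shown with a toy queueing model. This is a minimal sketch with made-up zone sizes and road capacity, not a real traffic simulation: a single road moves only a fixed number of cars per time step, and releasing every zone at once starves the zone closest to the fire.

```python
# Minimal sketch of why phasing helps: a single road can move only
# `capacity` cars per time step. Releasing everyone at once mixes far-away
# cars in with those nearest the fire; phasing drains the closest zone first.
# Zone sizes and capacity are made-up numbers for illustration.

def steps_to_clear_nearest(zones: list, capacity: int, phased: bool) -> int:
    """Return time steps until the zone closest to the event (zones[0]) is empty."""
    if phased:
        # The nearest zone gets the road to itself until it is empty.
        return -(-zones[0] // capacity)  # ceiling division
    # Unphased: all zones share the road equally, so the nearest zone
    # only gets capacity / len(zones) throughput per step.
    share = capacity / len(zones)
    steps = 0
    remaining = zones[0]
    while remaining > 0:
        remaining -= share
        steps += 1
    return steps

zones = [300, 300, 300]  # cars in the near, middle, and far zones (hypothetical)
print(steps_to_clear_nearest(zones, capacity=60, phased=True))   # 5 steps
print(steps_to_clear_nearest(zones, capacity=60, phased=False))  # 15 steps
```

In this toy case, phasing gets the most endangered group out three times faster, which is exactly the kind of result a simulation could demonstrate to planners before a disaster rather than after.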
Another significant issue when managing disasters is communications.
While Cisco typically rolls trucks into disaster areas to restore communications as part of the company’s sustainability efforts, it can take days or even weeks to get those trucks to a disaster. That makes it critical for the government either to maintain an emergency communication platform that will operate when cell towers are down or to harden the cell towers so they don’t go down in the first place.
Interestingly, during 9/11, all communication was disrupted in New York City because there was a massive communications hub under the towers that failed when they collapsed. What saved the day was BlackBerry’s two-way pager network that remained up and working. In our collective brilliance, instead of institutionalizing the network that stayed up, we discontinued it and now don’t have a network that will survive the disasters we see worldwide.
It’s worth noting that BlackBerry’s AtHoc solution for critical event management would have been a huge help in the response to this latest disaster on Maui.
Again, simulation can showcase the benefits of re-establishing a more robust communications network that will survive an emergency, especially since most people no longer have AM radios, which used to be a reliable way to get information in a disaster.
Finally, autonomous cars will eventually form a mesh network that could potentially survive a disaster. Using centralized control, they could be automatically routed out of danger areas using the fastest and safest routes determined by an AI.
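The routing piece of that idea is well-understood graph search. Here is a hedged sketch, with a hypothetical toy road map, of how a central router might treat roads as a weighted graph, exclude segments inside the hazard zone, and run Dijkstra’s algorithm to find the fastest remaining route:

```python
import heapq

# Sketch of AI routing away from a danger area: model roads as a weighted
# graph (edge weights in minutes), drop any segment inside the hazard zone,
# and run Dijkstra to find the fastest safe route. The map is hypothetical.

def safest_fastest_route(graph, start, goal, blocked):
    """Dijkstra over the road graph, skipping nodes inside the hazard zone."""
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            path = [goal]                 # reconstruct the path back to start
            while path[-1] != start:
                path.append(prev[path[-1]])
            return list(reversed(path))
        if d > dist.get(node, float("inf")):
            continue
        for nxt, minutes in graph.get(node, []):
            if nxt in blocked:            # never route through the hazard
                continue
            nd = d + minutes
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    return None  # no safe route exists

roads = {
    "home":     [("junction", 5), ("shore_rd", 2)],
    "shore_rd": [("junction", 3)],
    "junction": [("shelter", 4)],
}
# shore_rd is in the fire zone, so the router detours via the junction.
print(safest_fastest_route(roads, "home", "shelter", blocked={"shore_rd"}))
# -> ['home', 'junction', 'shelter']
```

A real deployment would update edge weights and the blocked set continuously from sensor data, but the core routing logic is this simple.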
We usually rebuild after a disaster, but we tend to build the same types of structures that failed us before, which makes no sense. The exception was after the great San Francisco earthquake in 1906, which was the impetus for regulations to strengthen structures to withstand strong quakes.
In a fire area, we should rebuild houses with materials that could survive a firestorm. You can build fire-resistant homes using metal, insulation, water sprinklers, and a water source like a pool or large water tank. It would also be wise to use something like European Rolling Shutters to protect windows so that you could better shelter in place rather than having to evacuate and maybe getting caught on the road by the fire.
With insurance companies now abandoning areas that are likely to be at high risk, this building method will do a better job of assuring people don’t lose most or all of their belongings, family, or pets.
Again, simulation can showcase how well a particular house design could survive a disaster. In terms of rebuilding on Maui, 3D-printed houses go up in a fraction of the time and are, depending on the material used, more resistant to fire and other natural disasters.
One of the issues with floods and fires is the need to move large volumes of water quickly. While the scale of vehicle needed to deal with floods may be unachievable near-term, carrying enough water to quickly douse a fire while it is still relatively small is not.
We’ve been talking about bringing back blimps and dirigibles to move large objects for some time. Why not use them to carry water to fires rapidly? We could use AI technology to automate them so that if the aircraft has an accident, it doesn’t kill the crew. AI can, with the proper sensor suite, see through smoke and navigate more safely in tight areas, and it can act more rapidly than a human crew.
Much like we went to extreme measures to develop the atomic bomb to end a war, we are at war with our environment yet haven’t been able to work up the same level of effort to create weapons to fight the growing number of natural disasters.
We could, for instance, create unique bombers to drop self-deploying lightning rods in areas that are hard to reach to reduce the number of fires started by lightning strikes. The estimate I’ve seen suggests you’d need 400 lightning rods per square mile to do this, but you could initially just focus on areas that are difficult to reach.
You could use robotic equipment and drones to place the lightning rods on trees or drop them from bombers to reduce the roughly $100-per-rod purchase and installation cost at volume.
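The figures above make the cost easy to work out. Using the text’s estimates of 400 rods per square mile at roughly $100 per rod installed (the coverage area below is a hypothetical example, not a real deployment plan):

```python
# Back-of-envelope cost from the figures in the text: roughly 400 rods per
# square mile at about $100 per rod, purchase plus installation. The
# coverage area is a hypothetical example, not a real deployment plan.

RODS_PER_SQ_MILE = 400
COST_PER_ROD = 100  # dollars

def coverage_cost(square_miles: float) -> float:
    return square_miles * RODS_PER_SQ_MILE * COST_PER_ROD

print(f"Per square mile: ${coverage_cost(1):,.0f}")                   # $40,000
print(f"For 1,000 hard-to-reach sq mi: ${coverage_cost(1000):,.0f}")  # $40,000,000
```

At roughly $40,000 per square mile before any volume discount, targeting the hardest-to-reach, highest-risk terrain first looks affordable next to the cost of a single major wildfire.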
The real problem is that we aren’t taking these disasters seriously enough to prevent them. We seem to treat each disaster as a unique and non-recurring event even though in areas like where I live, they are almost monthly now.
Once a disaster occurs, we have the option of either moving to a safer location or rebuilding using technology that will prevent our home from being destroyed. Currently, most of us do neither and then complain about how unfair it is that we’ve had to experience that disaster again.
Given how iffy insurance companies are becoming about these disasters, I’m also beginning to think that spending more money on hardening and less on insurance might result in a better outcome.
While AI could contribute here, developers haven’t yet trained it on questions like this. Maybe it should be. That way, we could ask our AI what the best path forward would be, and its answer wouldn’t rely on the vendors to which it’s tied, political talking points, or other biased sources. Instead, it would base its response on what would protect us, our loved ones, and our assets. Wouldn’t that be nice?
My two favorite all-in-one computers were the second-generation iMac, which looked like the old Pixar lamp, and the second-generation IBM NetVista.
I liked the Apple because it was incredibly flexible in terms of where you could move the screen, and the IBM because, unlike most all-in-ones, you could upgrade it. Sadly, both were effectively out of the market by the early 2000s.
Since then, the market has gravitated mainly toward the current generation iMac, where you have the intelligence behind the screen, creating a high center of gravity and a lower build cost. In my opinion, this design creates a significant tip-over risk if the base is too light — as it is in the current iMac.
The HP EliteOne 870 G9 has a wide, heavy base that should prevent it from toppling if bumped, Bang & Olufsen sound (which filled my test room nicely), a 12th Gen Intel processor, a 256GB SSD, 8GB of memory, and an awesome 27-inch panel.
Unlike earlier designs, it has a decent built-in camera that doesn’t hide behind the monitor. In practice, I think this is a better solution because it’s less likely to break.
The HP EliteOne 870 G9 27-inch All-in-One PC is a versatile desktop solution. (Image Credit: HP)
As with most all-in-ones, the 870 G9 uses integrated Intel graphics, so it isn’t a gaming machine. Still, it’s suitable for those who might do light gaming and mostly productivity work, web browsing, and videos. The game I play most often ran fine on it, but it is an older title.
The screen is a very nice 250 nit (good for indoors only), FHD, and IPS display. Also, as with most desktop PCs, the mouse and keyboard are cheap, but most of us use aftermarket mice and keyboards anyway, so that shouldn’t be a problem. The base configuration costs around $1,140, which is reasonable for a 27-inch all-in-one.
A fingerprint reader is optional, but I found Windows Hello worked just fine with the camera, and I like it better. Installation consists of two screws to secure the monitor arm to the base, and then the monitor/PC simply snaps onto the arm. This all-in-one is a vPro machine, which means it will comply with most corporate policies. At 24 pounds, it is easy to move from room to room, but no one will mistake it for a truly mobile computer.
The PC has a decent port selection, with two USB Type-C ports, five USB Type-A ports, and a unique HDMI-in port in case you want to connect a set-top box, game system, or other video source and use it as a TV. That makes it a decent option for a small apartment, dorm, or kitchen where a TV/PC combo might be useful.
Clean design, adequate performance, and truly awesome sound make the HP EliteOne 870 G9 a terrific all-in-one PC — and my Product of the Week.
The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.
Updates to the Omniverse platform include:
Huang also announced a broad range of frameworks, resources and services for developers and companies to accelerate the adoption of Universal Scene Description, including contributions such as geospatial data models, metrics assembly and simulation-ready, or SimReady, specifications for OpenUSD. Frankly, the momentum behind the specification is starting to look unstoppable.
Huang also announced four new Omniverse Cloud APIs built by NVIDIA for developers to more seamlessly implement and deploy OpenUSD pipelines and applications.
ChatUSD — a large language model (LLM) agent that assists developers and artists working with OpenUSD data and scenes by generating Python-USD code scripts from text and answering USD knowledge questions.
RunUSD — a cloud API that translates OpenUSD files into fully path-traced rendered images by checking compatibility of the uploaded files against versions of OpenUSD releases, and generating renders with Omniverse Cloud.
DeepSearch — an LLM agent enabling fast semantic search through massive databases of untagged assets.
USD-GDN Publisher — a one-click service that enables enterprises and software makers to publish high-fidelity, OpenUSD-based experiences to the Omniverse Cloud Graphics Delivery Network (GDN) from an Omniverse-based application such as USD Composer, as well as stream in real time to web browsers and mobile devices.
Huang bills the NVIDIA AI Workbench as a unified, easy-to-use toolkit to quickly create, test and fine-tune generative AI models on a PC or workstation — then scale them up to operate in virtually any data center or public cloud.
The idea is that the Workbench removes the complexity of getting started with an enterprise AI project, and allows developers to easily fine-tune models from popular repositories. Hundreds of thousands of pretrained models are already available on the rapidly mushrooming AI market, and customizing them with the many open-source tools available can be challenging and time consuming to say the least.
The AI Workbench will provide a shortcut and leading AI infrastructure providers — the likes of Dell Technologies, Hewlett Packard Enterprise, HP Inc., Lambda, Lenovo and Supermicro — are onboard, as is startup Hugging Face, which already has 2 million users. All of this will put generative AI supercomputing at the fingertips of millions of developers building large language models and other advanced AI applications, reckons Huang. We await the results of that with what can only be described as a small amount of trepidation.
As we said, the speech was wide-ranging to say the least.
Huang also said that NVIDIA and global workstation manufacturers are announcing powerful new RTX workstations from the likes of BOXX, Dell Technologies, HP and Lenovo, based on NVIDIA RTX 6000 Ada Generation GPUs and incorporating NVIDIA AI Enterprise and NVIDIA Omniverse Enterprise software.
Separately, NVIDIA released three new desktop workstation Ada Generation GPUs — the NVIDIA RTX 5000, RTX 4500 and RTX 4000 — to deliver the latest AI, graphics and real-time rendering technology to professionals worldwide.
And, at the show’s Real Time Live Event, NVIDIA researchers have also been demonstrating a generative AI workflow that helps artists rapidly create and iterate on materials for 3D scenes, using text or image prompts to generate custom textured materials faster and with finer creative control than before.
Have a look at all of it below.
Since the Supra’s launch in 2019, enthusiasts have begged Toyota to add a manual transmission option. While the eight-speed automatic works well, we (and much of the enthusiast public) believed it was missing that extra layer of feedback and enjoyment.
Our wish was granted for the 2023 model year, and enthusiasts responded. Toyota says that since the manual option went on sale, 47 percent of Supras sold in America have been delivered with three pedals. While that’s not Porsche GT3 levels of demand, it’s still an exciting ratio, and proof that the manual transmission still has a place in today’s market, even outside of six- and seven-figure supercars.
A lot of that initial demand likely comes from purists who will accept nothing less than six speeds and an H-pattern knob in the center console, no matter how it improves the driving experience. But considering just how much better the manual Supra drives versus the auto, we suspect that ratio will hold true as time goes on. The gearbox has a satisfying notchiness not present in any new BMW I’ve driven. The shifter itself—the knob and the stem attaching it to the rod—is thinner and nicer to hold. The gates are perfectly spaced, and while the throws aren’t the shortest I’ve felt, they’re still satisfying. Best of all, the gearbox is well-matched to the 382-hp BMW B58 straight-six’s torque delivery.
Like the rest of the Supra, the stick shift is the result of a collaboration between Toyota, BMW, and the company’s parts suppliers. The gearbox, codenamed GS6L50TZ for you BMW nerds, isn’t sourced from any existing car—it's a new item developed specifically for use in the Japanese sports coupe. Keisuke Fukumoto, assistant chief engineer for the Supra, told Road & Track the casing comes from the current-generation 3-Series which, up until recently, offered a manual option in Europe. But the gears, he says, are from the M3. The final gear ratio in the differential has been raised from 3.15 to 3.46 to promote better acceleration.
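The effect of that final-drive change is easy to quantify. A quick check on the 3.15-to-3.46 swap mentioned above:

```python
# Quick check on the final-drive change: going from 3.15 to 3.46 multiplies
# wheel torque (and engine rpm at a given road speed) by roughly 10%, which
# is where the stronger acceleration comes from.

old_ratio, new_ratio = 3.15, 3.46
gain = new_ratio / old_ratio - 1
print(f"Torque multiplication gain: {gain:.1%}")  # about 9.8%
```

The trade-off is slightly higher cruising rpm in top gear, a penalty Toyota evidently judged worthwhile for a sports coupe.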
Toyota says it spent a great deal of time dialing in the Supra’s shift feel to set it apart from anything BMW (or any other brand) is offering right now, going to great lengths to develop a unique sensation.
“We wanted to make this vehicle very driver-oriented, it was something that we really were particular about,” Supra chief engineer Fumihiko Hazama told Road & Track. “We looked at things carefully when it came to the feeling of [the shifter] and it was very much a joint development with BMW.”
The engineering team brought together four manual-equipped benchmark cars to set a target for how the Supra’s shifter should feel: a Toyota GT86, a BMW M140i, a BMW M2 Competition, and a BMW Z4 (the base four-cylinder Z4 is available with a stick in Europe). Upon deciding the Supra’s shifter feel target, the team soon realized it couldn’t simply pull from BMW’s existing parts bin to achieve its goals.
“You can imagine some of the components were coming from luxury vehicles,” technical manager for vehicle performance management Herwig Daenens told Road & Track. “We had to change them to provide that GR shift feeling. For instance, the Supra has a very driver-oriented cabin, a cockpit. So it means that there is less space; you can have less shift travel. So we had to adapt the shift [linkage] to make sure that everything fit in this driver-oriented cockpit.”
Toyota could’ve just stopped there and popped a BMW shifter knob into the Supra and called it a day. But it didn’t, instead pouring as much time and effort as possible into creating the perfect contact point for drivers.
“When it comes to the knob, we looked at the weight very carefully, and also the shifting direction,” Hazama said. “The knob shape is something that we looked at very closely as well. The millimeter differences mattered to us in deciding that, and we also got a lot of feedback from the European [Toyota team], the BMW people, and also within [Toyota].”
The manual Supra’s software, too, was a big focus for the team. Every three-pedal Supra gets Toyota’s i-MT, or Intelligent Manual Transmission feature, as standard. Switch it on, and the Supra will rev-match for you on downshifts. BMW has a rev-matching feature of its own, but engineers insist this is an in-house piece of software developed for the Supra, rather than something borrowed from the Germans.
“[The i-MT] is something that we spent quite some time on tuning to the level that we deemed was okay for sports cars,” Daenens said. “This was not just a carryover from BMW. This was something that did not exist for the performance that we wanted.”
“We really looked closely at the shift speed,” Hazama added. “So whether you're going slow or fast, you wanted to deliver it a good sort of a quick rhythmical feel to match the rotation of the engine. So we did a lot of tuning around that as well.”
These changes didn’t come easily. The bulk of the manual Supra’s development and testing happened during the heart of the COVID-19 pandemic, forcing Toyota’s engineering teams in Europe and Japan to find clever ways to communicate and share ideas with each other, as well as BMW and its suppliers. Eventually they decided on having two separate development teams, one for Europe, and another for Japan. Each team would do their own testing on identical parts, then come together online to discuss their findings, rather than meet face-to-face.
“This whole thing was very new to us,” chief test driver Hisashi Yabuki told Road & Track. “We've never developed it in that way. So I think that actually, there's some positives that came out of it as well and perhaps we can utilize this new way of working together more in the future. So I think we were able to gain something by overcoming that challenge.”
“It was very challenging and a new way of working,” Daenens added. “Even now that COVID is finished we still implement some of these processes in our current development style because it is more efficient, and of course we learned from it.”