Exam CAT-020 Actual Questions are updated on daily basis

killexams.com provides valid, up-to-date 2022 CAT-020 Free PDF with Exam Braindumps Questions and Answers. Practice our CAT-020 test prep questions and answers to deepen your knowledge of the tips and tricks used by vendors, and pass your CAT-020 test with high marks. We ensure your success in the Test Center, covering all the topics of the CA eHealth r6 Professional test and building your knowledge. Pass with our CAT-020 test prep.

Exam Code: CAT-020 Practice test 2022 by Killexams.com team
CA eHealth r6 Professional
CA-Technologies Professional test
Killexams : CA-Technologies Professional test - BingNews https://killexams.com/pass4sure/exam-detail/CAT-020

Killexams : Virtual reality can support and enhance outdoor environmental education

Through virtual reality, students can experience environmental processes that would otherwise be invisible to them. (Shutterstock)

The use of virtual reality (VR) and augmented reality (AR) for environmental education is controversial. Some are concerned that these technologies might replace or disrupt outdoor experiences that can connect students to nature and develop pro-environmental behaviours.

However, learning through technology and being outdoors aren’t mutually exclusive. When VR and AR are used effectively they can support and enhance environmental education while contributing to students’ positive well-being.

Access and connection to nature

Many nature locations are inaccessible to students due to distance, safety concerns, economic barriers or ability.

Access to ecologically sensitive areas like coral reefs or wetlands is limited in order to preserve them. VR can provide an alternative way to experience these locations.

Virtual technologies can also promote outdoor trips close to home and help students connect with global and local environmental issues. For example, research by virtual reality design expert Ana-Despina Tudor, with colleagues, used a 360-degree field trip of the Borneo rainforest to teach students about deforestation. Lessons were then applied to a local nature reserve being affected by railroad construction. Students worked with a local charity to help protect it.

Multiple points of view

Such research holds promise for those seeking to extend the connection between a sense of place and pro-environmental behaviour to regional, continental and global scales.

That means adopting eco-friendly attitudes that can minimize adverse effects on the natural environment wherever these effects occur.

“Wicked” or complex environmental problems require students to engage with multiple places and points of view. Improved access through virtual simulations may promote empathy and overcome inaction brought on by the psychological distance that students might feel towards nature hit hardest by climate change.

Read more: From the Amazon, Indigenous Peoples offer new compass to navigate climate change

Making the invisible visible

VR and AR lose much of their potential when they are only used to simulate outdoor environments. Instead, these technologies become transformative when students can experience environmental processes that would otherwise be invisible to them due to their scale or the timeframes over which changes occur.

Consider a virtual reality simulation known as the Stanford Ocean Acidification Experience. During this simulation, students experience the effects of a century’s worth of ocean acidification on reef biodiversity by moving “amid coral as it loses its vitality” and observing how increasingly acidic water affects marine life.

When researchers measured the effect of this simulation by comparing student test scores, they found that knowledge of ocean acidification increased by almost 150 per cent and was retained after several weeks.

Combining information sources

AR can be effective at combining different multimedia and information sources about environmental processes. Harvard researchers developed the AR tool EcoMOBILE to help middle-school students monitor water quality.

While outdoors monitoring water, students can play an augmented reality game on a smartphone designed to engage them in learning about water ecosystems.

The program resulted in high levels of engagement, and significant gains to understanding and problem-solving.

Critical environmental education

Compared to traditional modes of outdoor education, VR and AR can provide opportunities to include diverse knowledges.

Practitioners of critical approaches to environmental education may take this opportunity to engage with stories produced by marginalized communities about their experiences of nature and climate change.

Teachers can then engage students in self-reflection while highlighting broader issues surrounding social and environmental justice.

Engaging Indigenous knowledges

Camosun Bog 360 is a virtual tour of a local wetland in Vancouver, and is one example of this approach.

Community interviews with volunteers who are engaged in bog restoration, and videos produced by the Musqueam First Nation are embedded and linked throughout the field trip. This content is also available to students in-person using QR codes and their smartphones.

One of the authors of this story, Micheal, developed related resources in partnership with the Pacific Spirit Park Society and Camosun Bog Restoration Group to use in educational settings.

The goal of the field trip is to introduce students to creatures and plants, help them reflect on colonial histories of Camosun Bog, and encourage them to protect the bog through volunteerism.

However, care must be taken. As Métis/otipemisiw anthropologist Zoe Todd explains, Indigenous knowledges are too often filtered through white intermediaries. At stake is that Indigenous voices can be lost or distorted. It is vitally important that Indigenous people tell their own stories.

In the case of Camosun Bog 360, the Musqueam Teaching Kit provides guidance to the researcher. This kit, developed by the Musqueam First Nation, encourages students and teachers to learn about their culture, language and histories. It provides links, videos and other teaching materials for sharing with students.

Building environmental stewards

Those who are skeptical of whether VR and AR can support in-person outdoor education should consider the important role these technologies play in equipping students to navigate challenges today.

Indeed, skills like digital literacy, creative thinking, communication, collaboration and problem solving are more essential than ever as students transition to the professional world.

VR and AR can enable students to participate in solving complex environmental problems, present and future. A drawback is the rapid pace of advancement in hardware, software and implementation: schools can be slow to adopt new technologies, due to both the time it takes to train instructors and economic and administrative barriers, and they must also weigh how long an investment will remain worthwhile.

Read more: Investing in technologies for student learning: 4 principles school boards and parents should consider

The environmental stewards of tomorrow will need to adapt to the new tools researchers and professionals are using to understand, address and communicate wicked environmental problems. Without appropriate training and practice using these technologies, students could be put at a disadvantage as they enter higher education and the workforce.

Educators have a role in empowering students as stewards, such as finding new ways to include emerging technologies in environmental education.

This article is republished from The Conversation, a nonprofit news site dedicated to sharing ideas from academic experts. It was written by: Micheal Jerowsky, University of British Columbia and Ann Borda, The University of Melbourne.

Micheal Jerowsky receives funding from the Social Sciences and Humanities Research Council (SSHRC).

Ann Borda does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Mon, 11 Jul 2022 03:33:00 -0500 https://ca.news.yahoo.com/virtual-reality-support-enhance-outdoor-153338078.html
Killexams : Building a Drone that Resists the Wind as Well as a Bird Can: Animal Dynamics Works to Test the Stork in Realistic Environments

Animal Dynamics Partners with University of Manchester for Advanced Virtual Wind Simulation

by DRONELIFE Staff Writer Ian M. Crosby

Bio-inspired autonomous systems developer Animal Dynamics has announced a collaboration with the University of Manchester for the improvement of its simulated environment in order to advance the commercial production of its Stork STM Uncrewed Aircraft System.

Continue reading below, or listen:

Wind simulation within a virtual environment allows for the exposure of Uncrewed Aircraft Systems (UAS) to a variety of wind scenarios that would be too challenging or costly to recreate in the real world. The advanced simulation software supplied by the Mechanical, Aerospace, and Civil Engineering Division at the University of Manchester allows Animal Dynamics to perform more accurate simulations with simulated wind data, providing a means of testing flight control strategies for take-off and landing under challenging wind conditions.

The introduction of this software is enabling Animal Dynamics to establish a flight control system capable of responding to local wind conditions in mere seconds, increasing the tolerance of UAS to extreme wind conditions and resulting in a safer airspace.
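Animal Dynamics' actual simulation stack is proprietary, but the idea described above (exposing a flight controller to wind scenarios inside a virtual environment) can be sketched in miniature. In this toy example, a simple PD position controller tries to hold a hover while randomized gust forces push the vehicle, and sweeping the gust strength shows how drift grows with wind. Every number, and the controller itself, is an illustrative assumption rather than anything from Animal Dynamics' design.

```python
import random

def simulate_hover(gust_strength, steps=500, dt=0.02, kp=8.0, kd=4.0):
    """Toy 1-D hover: a PD controller holds position while random
    wind gusts push the vehicle sideways. Returns worst drift (m)."""
    pos, vel = 0.0, 0.0
    worst = 0.0
    rng = random.Random(42)  # fixed seed so runs are repeatable
    for _ in range(steps):
        wind = rng.uniform(-gust_strength, gust_strength)  # gust force
        thrust = -kp * pos - kd * vel                      # PD correction
        accel = thrust + wind
        vel += accel * dt
        pos += vel * dt
        worst = max(worst, abs(pos))
    return worst

# Sweep wind scenarios that would be costly to recreate outdoors
for gust in (0.0, 5.0, 20.0):
    print(f"gust +/-{gust:4.1f} N -> worst drift {simulate_hover(gust):.3f} m")
```

In a setup like this, the safety question becomes measurable: a controller tuning that keeps worst-case drift small across the gust sweep passes, while one that diverges fails before any real aircraft is risked.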

“Simulation holds the key to unlocking aerial autonomy. It is crucial that we are able to expose our systems to challenging environments in virtual worlds,” said Ian Foster, Head of Engineering at Animal Dynamics. “Thanks to the team at the University of Manchester we can now blend realistic wind data into our simulated environments, something that will accelerate our ability to be able to address the urgent, complex, and dangerous operational challenges across the globe.”

“Research is an important part of my role at the University of Manchester aside from teaching,” said Ben Parslew, Senior Lecturer in Aerospace Engineering at the University of Manchester. “I really enjoy working with innovative companies like Animal Dynamics to develop new technologies and new engineering products. In this particular project we only just scratched the surface of what is possible with simulation, so it is exciting to think about all the further things we can do together in the future.”

Animal Dynamics’ Stork STM is a heavy-lift powered parafoil able to transport a payload weighing 135 kg for up to 250 miles. Stork STM is designed to address critical and hazardous operational challenges, including providing humanitarian aid in crisis zones, improving emergency response strategies in inaccessible locations, enabling delivery in military settings, and developing sustainable agriculture solutions.

Stork STM will join Animal Dynamics’ previous UAS, the ST-25, designed for last-mile logistics with a payload drop capability of 10 kg over 40 km, infrastructure monitoring, and numerous LiDAR and other surveillance missions.

Read more about Animal Dynamics and other heavy-lift, last-mile logistics solutions:

Ian attended Dominican University of California, where he received a BA in English in 2019. With a lifelong passion for writing and storytelling and a keen interest in technology, he is now contributing to DroneLife as a staff writer.

Mon, 18 Jul 2022 04:58:00 -0500 Miriam McNabb https://dronelife.com/2022/07/18/building-a-drone-that-resists-the-wind-as-well-as-a-bird-can-animal-dynamics/
Killexams : Make Room in the Garage: AIR ONE Personal eVTOL Successfully Completes Hover Test

AIR's Personal Transport eVTOL Aircraft Completes First Hover Test

by DRONELIFE Staff Writer Ian M. Crosby

On June 21st, consumer eVTOL (electric vertical takeoff and landing) aircraft creator AIR successfully completed the first hover test of its AIR ONE aircraft following the acquisition of its airworthiness certificate.

Continue reading below, or listen:

The test saw a full-scale AIR ONE prototype take flight in Megiddo, Israel, where it completed multiple hovers over the course of the day and the subsequent two weeks. The aircraft operated as intended, safely lifting off, hovering in place, and returning to the ground. Additionally, AIR ONE’s energy use proved its ability to meet expected performance metrics. The company intends to expand to full flight envelope testing throughout 2022.

Unlike the air taxis that make up much of the developing advanced air mobility (AAM) market, AIR's personal transportation aircraft provides eVTOL flight for individual consumers. The all-electric AIR ONE can take off and land on any flat surface while carrying a 250 kg payload, and offers a practical range on a single charge at speeds of up to 155 mph. A compact two-seater, AIR ONE can be stored in most garages and driveways.

“It was truly awe-inspiring to watch AIR ONE lift off the ground for the first time. We’ve been on this upward journey for nearly five years and cannot wait for the public to join us on this ride,” said AIR CEO and Co-founder Rani Plaut. “This momentous milestone secures AIR’s spot as a market leader in the personal air mobility space, making the thrill of flight achievable on a daily basis. We look forward to continued growth as we launch into the next phase of development.”

The company is currently pursuing further strategic partnerships, both within the US and globally, in order to advance the development of infrastructure and policy related to the growing AAM space. AIR also continues to collaborate with the FAA to complete vehicle certification and establish guidelines for eVTOL pilot licensing.

In the time since AIR went public roughly a year ago, the company has successfully conducted drop testing, multiple propulsion tests, and a series of electronic and stability tests prior to AIR ONE’s first takeoff. Most recently, AIR also announced partnerships with air mobility companies FlyOnE, Espère AAM, and AeroAuto for the development of one of the country’s first showrooms for electric aircraft, as well as showcased the full-scale AIR ONE prototype at this year’s Kentucky Derby. Soon, AIR will also be holding an exhibit at EAA AirVenture Oshkosh 2022, from July 25-31, at booth #92.

Read more about AIR and urban air mobility vehicles:

Ian attended Dominican University of California, where he received a BA in English in 2019. With a lifelong passion for writing and storytelling and a keen interest in technology, he is now contributing to DroneLife as a staff writer.

Tue, 12 Jul 2022 05:50:00 -0500 Miriam McNabb https://dronelife.com/2022/07/12/make-room-in-the-garage-air-one-personal-evtol-successfully-completes-hover-test/
Killexams : TAAL COMPLETES AGREEMENT TO BRING 100 PH/S OF COMPUTING POWER ONLINE

FIRST IMMERSION COOLING DEPLOYMENT

TORONTO, July 13, 2022 /CNW/ - TAAL Distributed Information Technologies Inc. (CSE:TAAL) (FWB:9SQ1) (OTC:TAALF) ("TAAL" or the "Company"), a vertically integrated blockchain infrastructure and service provider for enterprise, announces its wholly owned operating subsidiary has entered into an agreement to acquire 968 Bitmain S19J Pro machines and host them with a subsidiary of LUXXFOLIO Holdings Inc. at a facility in New Mexico, representing an immediate increase of 100 petahash/s ("PH/s") of additional computing power. The machines will be immersion cooled, representing TAAL's first full immersion deployment and acting as a test bed ahead of final design plans for TAAL's flagship 50MW site in Grand Falls, New Brunswick, which will come online during 2023. Details of the agreement include:

  • 968 Bitmain S19J Pro machines immediately hashing upon agreement inception

  • Miners will use immersion cooling to optimize performance

  • The machines come with a one-year warranty and will be hosted in a facility located in New Mexico powered by majority non-carbon emitting solar energy

  • Total of 100 Petahash/second

  • TAAL can mine across all three SHA-256 based blockchain networks - Bitcoin Core ("BTC"), BitcoinSV ("BSV"), Bitcoin Cash ("BCH") - switching chains economically and dynamically to optimize yield.
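The fleet figures in the bullets above imply a per-unit rating that is easy to sanity-check: 100 PH/s spread across 968 machines works out to roughly 103 TH/s per miner, consistent with the advertised hashrate of a Bitmain S19J Pro. The per-machine figure is our inference, not something stated in the release:

```python
# Figures stated in the release
total_phs = 100   # total committed hashrate, PH/s
machines = 968    # Bitmain S19J Pro units

# 1 PH/s = 1,000 TH/s
per_machine_ths = total_phs * 1_000 / machines
print(f"~{per_machine_ths:.1f} TH/s per machine")  # ~103.3 TH/s
```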

"With this additional capacity we continue to execute on our network rebalancing program and diversification strategy and build robustness across our mining fleet," said Richard Baker, CEO of TAAL. "With this deployment our mining hash centre operations are in three diversified locations in North America and underpin our long-term objective of building out the transaction infrastructure of the future. We remain focussed on our goal of reaching 2 EH/s of hash power at full deployment."

About LUXXFOLIO

LUXXFOLIO Holdings Inc. is a publicly traded, vertically integrated digital asset company based in Canada. It operates an industrial scale cryptocurrency mining facility in the United States powered predominately by renewable energy with a focus on the blockchain ecosystem and generation of digital assets. LUXXFOLIO provides a liquid alternative for exposure to digital assets for the broader capital markets.

About TAAL Distributed Information Technologies Inc.

TAAL Distributed Information Technologies Inc. delivers value-added blockchain services, providing professional-grade, highly scalable blockchain infrastructure and transactional platforms to support businesses building solutions and applications on the BSV platform, and developing, operating, and managing distributed computing systems for enterprise users. BitcoinSV Blockchain is the world's largest public blockchain by all major utility metrics, data storage, daily transaction volume, scaling ability, and average block size.

For more information please visit – www.taal.com/investors

Neither the CSE nor its Regulation Services Provider accepts responsibility for the adequacy or accuracy of this release.

CAUTIONARY STATEMENT REGARDING FORWARD-LOOKING INFORMATION

Certain statements included in this news release constitute "forward-looking information" as defined under applicable Canadian securities legislation. The words "will", "intends", "expects" and similar expressions are intended to identify forward-looking information, although not all forward-looking information will contain these identifying words. Specific forward-looking information contained in this news release includes but is not limited to statements regarding: the type, number and performance of machines that have been acquired, TAAL's future computing power and capacity; development plans and redeployment of activities in North America, geopolitical risks to operations and TAAL's business and strategic plans. These statements are based on factors and assumptions related to historical trends, current conditions and expected future developments. Since forward-looking information relates to future events and conditions, by its very nature it requires making assumptions and involves inherent risks and uncertainties. TAAL cautions that although it is believed that the assumptions are reasonable in the circumstances, these risks and uncertainties give rise to the possibility that actual results may differ materially from expectations. Material risk factors include the future acceptance of Bitcoin SV and other digital assets and risks related to information processing using those platforms, the ability for TAAL to leverage intellectual property into viable income streams and other risks set out in TAAL's Annual Information Form dated March 31, 2022, under the heading "Risk Factors" and elsewhere in TAAL's continuous disclosure filings available on SEDAR at www.sedar.com. Given these risks, undue reliance should not be placed on the forward-looking information contained herein. Other than as required by law, TAAL undertakes no obligation to update any forward-looking information to reflect new information, subsequent or otherwise.

SOURCE Taal Distributed Information Technologies Inc.

Cision

View original content to download multimedia: http://www.newswire.ca/en/releases/archive/July2022/13/c2333.html

Tue, 12 Jul 2022 23:00:00 -0500 https://ca.finance.yahoo.com/news/taal-completes-agreement-bring-100-110000891.html
Killexams : Missile Test Ends in Explosion Seconds After Launch from Vandenberg Space Force Base

An odd rumble and an orange glow against a night sky apparently signaled a missile test flop that sparked a fire at Vandenberg Space Force Base late Wednesday.

Despite a news release saying the Minotaur II+ test would take place Thursday morning from the northern section of the base, the launch occurred the night before, at 11:01 p.m.

More than an hour after liftoff, Vandenberg officials confirmed the booster had exploded approximately 11 seconds after launching from Test Pad 01.

There were no injuries in the explosion and the debris was contained to the immediate vicinity of the launch pad, Vandenberg officials said in a statement released early Thursday.

“We always have emergency response teams on standby prior to every launch,” said Space Force Col. Kris Barcomb, vice commander of Space Launch Delta 30 and the launch decision authority for the Minotaur launch.

“Safety is our priority at all times.”

An investigative review board has been established to determine the cause of the explosion, Vandenberg officials said.

Area residents took to social media to air their concerns after the apparent anomaly upon liftoff led to a glow and reports of a fire.

Emergency dispatchers for the California Highway Patrol and other agencies received several reports about a heavy smell of smoke from residents in the Lompoc and Santa Maria valleys, but crews in the field said they suspected it was from Vandenberg.

The post-launch news release did not mention any fire, but Vandenberg officials sent a second release two hours after the launch confirming that the Vandenberg Fire Department had responded to a blaze linked to the launch.

“The fire is producing smoke but not immediate danger to the rest of base,” the statement said.

Off-base fire agencies were dispatched to the base fire at approximately 1:30 a.m. 

The Minotaur Fire reportedly blackened at least 150 acres. Vandenberg officials did not provide information about whether the fire had been contained or other updates.

Santa Barbara County firefighters and equipment were released at approximately 8 a.m. Thursday.

Reached by phone, Vandenberg Public Affairs Office Chief Robin Ghormley told Noozhawk she could not explain why the launch occurred Wednesday night when an announcement her office sent out eight hours earlier said it would occur Thursday morning.

The tardy statement sent Wednesday afternoon also lacked any mention of the planned window.

However, the headline announced "planned for Thursday" and the first paragraph said the test "is scheduled for the morning of July 7 from north Vandenberg."

It’s not clear if military officials failed to account for the one-hour time difference between California and the home of the Air Force Nuclear Weapons Center at Kirtland Air Force Base in New Mexico.

The test involved the program for the Mark21A Reentry Vehicle, or Mk21A, which rode aboard the Minotaur+ booster.

“The test launch will demonstrate preliminary design concepts and relevant payload technologies in operationally realistic environments,” according to the Air Force Nuclear Weapons Center.

The Air Force is developing a next-generation intercontinental ballistic missile under the Ground-Based Strategic Deterrent weapons system, also known as Sentinel.

As part of that effort, the military and defense contractors have been working to modify the existing Mk21 re-entry vehicle, typically a cone-shaped device that sits inside a missile's nosecone. Re-entry vehicles would carry a warhead for the final leg of a trip toward the target.

The program seeks to tweak the older Mk21 re-entry vehicle with the capability to deliver the W87-1 warhead for Sentinel.

That Sentinel weapon system would replace the fleet of aging Minuteman III ICBMs sitting on alert around Malmstrom Air Force Base near Great Falls, Montana; Minot Air Force Base outside Minot, North Dakota; and F.E. Warren Air Force Base in Cheyenne, Wyoming.

Wednesday’s launch occurred from Vandenberg’s most northern coastline. The launch pad is just over a hill separating the base from Casmalia, where a 1990s missile test failure led to a fire that threatened the tiny community of under 150 residents.

Noozhawk North County editor Janene Scully can be reached by email. Follow Noozhawk on Twitter: @noozhawk, @NoozhawkNews and @NoozhawkBiz. Connect with Noozhawk on Facebook.

Wed, 06 Jul 2022 22:51:00 -0500 https://www.noozhawk.com/article/missile_test_ends_in_explosion_seconds_after_launch_from_vandenberg_sfb
Killexams : DALL-E 2: The world's seen nothing like it, but can AI spark a creative renaissance?

These Canadian artists are early adopters, and they’re blown away by what they’ve seen so far.

Bridget Moser and Ginette Lapalme are two Canadian artists who were granted access to DALL-E 2. At left, one of Moser's "collaborations" with the AI tool, and one of Lapalme's is at right. (Bridget Moser/Ginette Lapalme/DALL-E 2)

Think of anything. Seriously, anything. A turtle wearing a cowboy hat. Spider-Man blowing out birthday candles. Maybe a landscape painting of Algonquin Park, rendered in the style of Monet or Takashi Murakami. 

If you have the words to describe your vision, no matter how strange — or banal, for that matter — it's now possible to generate a picture in an instant, a file ready to download and share on your platform of choice.

It's not magic, it's AI. And in the last few months, the world's become increasingly aware of systems that are capable of conjuring original images from a few simple keywords. The results are often startling — though not always because of their realism.

That's especially true in the case of Craiyon (a tool previously known as DALL-E Mini), which is arguably the best known system of its sort. Free to use and available to all, the public swiftly adopted this open-source image-generator earlier in the year, and it's become a meme-maker's fantasy, spawning infinite threads of jokey one-upmanship. 

Craiyon is trained on millions of images, a database it refers to when deciphering a user's text-based request. But the pictures it delivers have never existed before. Every request for a portrait of Snoop Dogg eating a cheeseburger is wholly one-of-a-kind. The pictures lack photorealistic clarity, and there's something to its grainy aesthetic, punctuated by spun-out faces, that suggests a nightmare seen through a car-wash window, but more powerful options are waiting in the wings. 

On Thursday, Meta revealed it's developing a text-to-image AI called Make-A-Scene, and earlier this year, Google revealed the existence of its own text-to-image generator (Imagen). Neither of those tools are currently available to the public, but there are other companies that have opened their projects to outside users. 

Midjourney would be one example; there's a waitlist to access its current test version, but admitted users can opt to join paid subscription tiers. Perks include friend invites and unlimited image requests.

But DALL-E 2 is the system that's probably drawn the most attention so far. No relation to Craiyon/DALL-E Mini, it was developed by OpenAI, a San Francisco-based company whose other projects include AI capable of writing copy (GPT-3) and code (Copilot). DALL-E 2 is the latest iteration of a text-to-image tool that they first revealed in January 2021. 

At that time, OpenAI's system could produce a small image (256-by-256 pixels) in response to a text prompt. By April of this year, however, the AI (DALL-E 2) was capable of delivering files with four times that resolution, while offering users the option of "inpainting," or further refining specific elements within their results: details including shadows, textures or whole objects. 
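One detail worth unpacking: if "four times that resolution" is read as four times each linear dimension, which matches DALL-E 2's publicly documented 1024-by-1024 output size, the pixel count actually grows sixteenfold. A quick check of that arithmetic:

```python
# DALL-E 1 (Jan 2021) output size
w1 = h1 = 256

# "Four times that resolution", read as 4x each linear dimension,
# gives DALL-E 2's 1024x1024 output
scale = 4
w2, h2 = w1 * scale, h1 * scale

print(f"{w2}x{h2} = {w2 * h2:,} pixels vs {w1 * h1:,} pixels "
      f"({(w2 * h2) // (w1 * h1)}x the pixel count)")
```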

Never mind the particulars of how any of this is possible (there are other resources out there that parse the science) — the results are astounding in their detail and slick enough for a glossy magazine cover. (Just check out the latest issue of Cosmopolitan, which features "the world's first artificially intelligent magazine cover.") 

As such, DALL-E 2's developers have taken a few steps to prevent users from dabbling in evil. Deepfake potential would seem to be a concern. Want a picture of a real person? That's a no-no, though the technology is technically capable of doing it. And there are safeguards against images promoting violence, hate, pornography and other very bad things. Various tell-tale keywords have apparently been blocked; for example, a picture of a "shooting" would be a non-starter. 

Access to the tool remains limited to a group of private beta testers, and though OpenAI says they want to add 1,000 new users each week, there are reportedly more than a million requests in their waitlist queue. 

Still, even at this stage, the technology's existence is raising plenty of questions, if not as infinite in quantity as the images DALL-E 2 can produce. Are creative industries facing obsolescence? When anyone can generate a professional-quality image with a few keystrokes, how will that impact graphic design, commercial illustration, stock photography — even modelling? What if the AI's been trained to be racist or sexist? (Right now it sure seems to be.) Who owns the images that are spat out by one of these systems — the AI, the company that developed it or the human who typed "sofa made of meatball subs" and made it happen?

The future remains as murky as a Craiyon render, but for now, there's at least one question we can begin to answer. If these tools have indeed been developed as a way to "empower people to express themselves creatively" — as the creators of DALL-E 2 purport — then what's it like to actually use them? And what if your job is all about creativity? How are artists using AI right now?

CBC Arts reached out to a few Canadian artists who've been dabbling with AI systems, including DALL-E 2, whose research program currently includes more than 3,000 artists and creatives. How do they see this technology changing the way they work?

Here are some of the early thoughts they had to share.

More of Bridget Moser's DALL-E 2 output can be found on her Instagram. (Bridget Moser/DALL-E 2)

Bridget Moser: 'It's helping me realize what I want to make real'

A 2017 finalist for the Sobey Art Award, Bridget Moser is a Toronto-based artist who's renowned for her performance and video-based work. In April, she joined DALL-E 2's waitlist, and spent the next few months brainstorming what to feed it first. When reached by CBC Arts, she had been playing with it for just two weeks.

What are you doing with DALL-E 2?

At the start I was a little bit addicted, I would say. With DALL-E 2 you get 50 prompts per 24 hours. Each prompt generates six images, so you're technically getting 300 images in a 24-hour period. For the first five days, I think I just maxed it out and then had to slow down a little bit. I went too far. (laughs)
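The quota arithmetic Moser describes is easy to confirm (figures taken directly from the interview):

```python
prompts_per_day = 50   # DALL-E 2's daily prompt allowance at the time
images_per_prompt = 6  # each prompt returns six candidate images

daily_images = prompts_per_day * images_per_prompt
print(daily_images, "images per 24 hours")  # 300 images per 24 hours
```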

Right now I'm just saving the images. It really feels like a process of sketching or brainstorming, and I feel like I've learned something from what it's producing that will lead to something else. 

A portrait of the artist. (Bridget Moser, that is, not DALL-E 2). (Bridget Moser/DALL-E 2)

There's tons of rules you have to follow with DALL-E 2 and it will refuse certain prompts as well if it thinks it's going to violate the rules. No gore, no violence: nothing like that. And nothing shocking.

It's very interesting to me to try and work within those constraints and see what feels sort of like that, but actually isn't. 

Some of the images I'm quite in love with, but I don't know what will happen to them. Some of them are on Instagram. I've tried not to share things that are really unsettling. Even the ones I've posted, some people have been like, "This is very disturbing."

What's the first thing you made?

What was I doing at the start? I guess it was about making impossible photographs, in some respect.

One of my favourites from the very first day was probably "12 rubber gloves in the air + in the woods at night + disposable camera + flash photo." It generated this kind of ghostly looking photo, and I was just so pleased with that.

One of the early "impossible photographs" generated by Bridget Moser using DALL-E 2. (Bridget Moser/DALL-E 2)

Is there an art to generating prompts?

Yeah, totally. I think it's like a skill that you fine-tune the more you use it. You learn these little idiosyncrasies that exist that you didn't know about until you ask it to do something or change something.

Why should artists have access to AI tools like DALL-E 2?

I wish more artists had access to it because I think it's going to be really important for a lot of people, and it would be really disappointing if the only people who are able to use it are the Bored Ape NFT guys or something. 

I can also imagine a lot of people on 4chan are salivating at the prospect of using something like this, so I think it's important that artists have early access to try and hopefully mitigate some of the more problematic aspects of technology like this. It's inevitably problematic. There are tons of inherent biases when you're training an AI just because humans are inherently biased and we're the ones training it. I would hope the way artists use it would be not-evil, but I certainly know of evil artists, so there's no certainty there either.

One of Bridget Moser's DALL-E 2 experiments: "Confusion about Crocs™" (Bridget Moser/DALL-E 2)

How is AI going to change what you do?

I'm still not totally sure, but it makes me want to experiment more materially, which is something that I feel lost doing a lot of the time. 

I feel like I'm pretty good at performing and I'm pretty good at making videos. I have things I want to do sculpturally, but just can't figure out on my own. And I feel like in seeing hundreds and hundreds of DALL-E 2 variations, it's helping me realize what I want to make real.

Part of me also feels like this technology is inevitable. It's coming for us no matter what. - Bridget Moser, artist

I go back and forth a little bit. In some ways, these images feel kind of complete on their own; maybe they don't need to be made into any kind of physical iteration.

Part of me also feels like this technology is inevitable. It's coming for us no matter what. And so there is something kind of reassuring about being able to use it in a way that feels generative and creative. It's going to create new possibilities instead of the sense of doom that I think sometimes comes with this kind of technology. 

Winston Hacking wound up with this "cartoon flower" after trying Midjourney for the first time. (Winston Hacking/Midjourney)

Winston Hacking: 'I wouldn't even claim it as my own artwork'

A filmmaker and animator, Winston Hacking is known for a signature collage-based style that can be seen in music videos for Flying Lotus, Andy Shauf and BadBadNotGood. (A clip for the latter was an honouree at the 2022 Prism Music Prize this month.) Previously based in Toronto, Hacking now lives in Portland, Ore., where he's been experimenting with Midjourney, feeding it 10 prompts per day and regularly sharing the faux-vintage results on Instagram.

What are you doing with Midjourney?

When I first saw what was happening, I was curious. Making collage artwork, the first thing I started thinking about was like, oh — I can create my own vintage magazines to source from. 

I go through archives — I go through Flickr, Creative Commons, public domain photos. And sometimes it's really hard to find something specific that you're looking for. Sometimes you don't know what you're looking for.

So that's the first thing that I thought about: what if I don't think about one of these generated images as a finished artwork — like, it's just an asset, it's just an element, like a piece of cut-out paper? That's kind of where I'm coming from as an artist, potentially using it for an animated project.

Can I create my own magazines and what would they look like — and how does that influence decision-making in collage?

"Plant person" generated with Midjourney by Winston Hacking. (Winston Hacking/Midjourney)

First impressions?

It's totally captivating. I would almost relate it to playing Nintendo for the first time when I was a kid — that kind of quality of, "Wow, I've never seen anything like this before."

I'm not sold on it as this game-changing thing that I'm going to embrace. Right now, I'm just kind of playing with it.

I think what we're really looking at is a whole new way of communicating — visual communication. I'm not so freaked out by it. I see a lot of potential for it to help people communicate ideas.

Winston Hacking has been sharing Midjourney results like this on his Instagram. (Winston Hacking/Midjourney)

Is there an art to generating prompts?

It's strange because I wouldn't even claim it as my own artwork, you know what I mean? I don't really claim ownership of the images that are being generated. I mean, I entered in prompts, but I don't know if that makes it my work or not.

I'm just describing something; it's not like I saw that image in my head.

As artists, that's what we do, right? We take something ... and we try to break it. - Winston Hacking, artist

I just know that certain things combined are beautiful — things I find beautiful in images, like textures and anomalies, aberrations.

What types of images do you like and why do you like those images? If you can answer that, then you might find something that impresses you.

"Balloon Animal" generated with Midjourney. (Winston Hacking/Midjourney)

What can artists get out of using these AI systems?

I think it's definitely important to at least embrace it and actually see what can be done.  As artists, that's what we do, right? We take something — we take this new medium or this new technology — and we try to break it. (laughs)

How is AI going to change what you do?

I really haven't made a decision on if I would integrate it into a project or not. I definitely know that there's certain times where I'm looking for something really specific and I can't find it, and I think that that's where it is a great tool. It can fill a gap; it can fill in a missing piece. I definitely see it as a tool.

An example of the "unreal bubblegum jewelry" that Ginette Lapalme has been posting on Instagram since getting access to DALL-E 2. (Ginette Lapalme/DALL-E 2)

Ginette Lapalme: 'A lot of it feels like it's straight out of my brain'

Based in Toronto, where she runs Toutoune Gallery on Bathurst Street, illustrator Ginette Lapalme has been dabbling with DALL-E 2 since early June. "Initially, I wasn't really sure what I could get out of it," she says. Now, her Instagram is full of blobby digital renders that bear an eerie resemblance to her IRL work.

What are you doing with DALL-E 2?

I'm inspired by finding strange objects — so, old tchotchkes, you know? Bootleg images or novelty toys. I was initially seeing if DALL-E 2 could create these things out of whole cloth. I was trying to see whether the machine could create these things that I get a lot of joy finding online or in real life.

I'm just kind of playing around with building different forms, feeding it a lot of different words that kind of explain the aesthetic I'm usually playing with — so different colours, different materials, different shapes.

It's kind of mind-bending because a lot of it feels like it's straight out of my brain, or like it's already fitting in with the work that I make. It's awe-inspiring. It's very bizarre.

More DALL-E 2 experiments from Ginette Lapalme. (Ginette Lapalme/DALL-E 2)

First impressions?

I got access on June 7th or 8th, so I've been diving pretty deep on this thing every day. 

A comedian and artist who I really like, Alan Resnick, posted about having access to it in early June. I was just really floored by what he was doing with it. I joined the waitlist then, and I think maybe he was able to recommend me.

He's done some really cool stuff with it in terms of video art. It's interesting to see different artists have access to these things. You see what people make with DALL-E Mini, and to me it's all kind of a little bit boring — like Mad Libs. But seeing artists I like have access to it? It's mind-blowing the way that people figure out how to use it to their benefit.

"Bubble gum figurine" generated by Ginette Lapalme using DALL-E 2. (Ginette Lapalme/DALL-E 2)

Why should artists have access to AI tools like DALL-E 2?

When I first heard about this thing, I was super wary. Having this machine, this AI, be able to fake a specific artist's style is off-putting. But it's a tool, and I think artists can benefit from it. It's an amazing tool that can kind of create images from your dreams. And I think everybody who has access to it who's an artist already has a specific style or way of thinking. They can do a lot of creative stuff with it.

How is AI going to change what you do?

I took a long break from making work with resin and plastics the last few years, and this machine is actually making me want to dive back into craft. I'm very inspired by it. 

It's a really funny thing to think about because this is an AI feeding me digital images. It's making unreal objects that look very textured, like they could be real objects. I'm trying to find a way to make them become real for me. 

Clint Enns. "Workers Leaving the Mine." Generated with Disco Diffusion. (Clint Enns/Disco Diffusion)

Clint Enns: 'It feels like I could actually make the type of images I want to make'

While he waits for DALL-E 2 access, Clint Enns is busy investigating whatever AI systems are available to him, but two free and open-source tools have been his go-to options so far: Craiyon and Disco Diffusion. Originally from Winnipeg, Enns is the artist behind Internet Vernacular, an ongoing found-photography project that traces the evolution of visual communication throughout the digital age. Its last public exhibition focused on the year 2004, and the birth of the shareable "social photo." It's tempting to think 2022, the dawn of DALL-E 2, will feature in a future chapter of the series.

First impressions?

Everybody's doing it right now, which is kind of fun. And I think it's really captured the imagination of a lot of artists.

Although machine learning has been around for a little while, it feels like the results were simply in the realm of science fiction or fantasy art. Kind of cheesy. But it feels like something's broke. It feels like there's real potential in the technology. It feels like I could actually make the type of images I want to make.

The images look flawed. I think that's what was magical about them for me — that they look glitchy and broken down. Like DALL-E Mini: all of the faces are just melting, right?

How are you using AI?

I'm still using DALL-E Mini. It's really informative to see what other artists are feeding into the machine as prompts. You can really learn about an artist's practice just by seeing what their prompts are, similar to the way that the machine is sort of learning from us. Like, the prompts they are using usually reflect what they are trying to make without the use of the computer.

I was trying to see how the machine would interpret my prompts, in particular where it failed. I like to give it what I consider impossible tasks, or things that I thought would be funny, or things that are self-referential. Like, "an AI-generated face." You put in that prompt and see what it thinks an AI-generated face looks like.

Here's what happened when Clint Enns requested "AI generated faces" from Craiyon/DALL-E Mini. (Clint Enns/Craiyon)

When I start to understand this technology a little better, I'm hoping to put out a chapbook where I think through the images. Like, thinking about what art is in an era where this machine is making art. I already have a line from it: something like, "I'm just waiting for the machine to become sentient enough to sue artists using its results for copyright infringement." 

Why should artists have access to AI tools?

Well, somebody like me, what I'm trying to do is explore where technologies break down or fail. I've been doing this throughout my practice — like making glitch art. I think artists are really good at finding those failures and exploiting them.

You can generate perfect landscapes with this technology, but you can only see so many of those perfect landscapes. That's where these technologies start to break down and open up. I really think that's the artist's job in all of this. 

How is AI going to change what you do?

I think this technology raises a lot of challenges to the artist. Can a machine do it better than us? But whenever a new technology comes up, it always poses a challenge to the artist. Think about photography, right? Why paint when you can just take a photo of something? Artists always find a way of both using the technology in innovative ways and responding to it.

It feels really exciting, like the birth of a new type of computing, you know? It's like when I was a kid and I got my first computer and I could do things that I couldn't do before. It feels like this tool is going to allow that. It has lots of potential.

These conversations have been edited and condensed.

Source: https://www.cbc.ca/arts/dall-e-2-the-world-s-seen-nothing-like-it-but-can-ai-spark-a-creative-renaissance-1.6521710 (Fri, 15 Jul 2022)
Certificate in Digital Business

This hands-on course examines the technologies and infrastructure required to support digital innovation.  The course examines the major components of the information technology infrastructure, such as networks, databases and data warehouses, electronic payment, security, and human-computer interfaces.  The course covers key web concepts and skills for designing, creating and maintaining websites, such as Grid Theory, HTML5, CSS, JavaScript, AJAX theory, PHP, SQL and NoSQL databases.  Other principles such as Web Accessibility, Usability and User eXperience, as well as best security practices, are explored in detail through a combination of lectures, in-class examples, individual lab work and assignments, and a final group project.
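To ground the database portion of that list, here is a minimal sketch of the kind of SQL join-and-aggregate exercise such a course typically assigns, written with Python's built-in sqlite3 module; the schema, table names and data are invented for illustration and are not drawn from the actual syllabus:

```python
import sqlite3

# Toy two-table schema of the sort used in an introductory SQL unit.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id),
                         total_cents INTEGER);
""")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "Ada"), (2, "Grace")])
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 1, 1999), (2, 1, 500), (3, 2, 4250)])

# Total spend per customer: a typical JOIN + GROUP BY exercise.
rows = conn.execute("""
    SELECT c.name, SUM(o.total_cents)
    FROM customers AS c
    JOIN orders AS o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()
print(rows)  # [('Ada', 2499), ('Grace', 4250)]
```

The same relational ideas carry over directly to the PHP-and-MySQL stack the course description names; only the driver API changes.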

Source: https://www.dal.ca/academics/programs/graduate/digital-innovation/program-details/digital-business.html (Fri, 31 Jul 2020)
Is Data Scientist Still the Sexiest Job of the 21st Century?

Ten years ago, the authors posited that being a data scientist was the “sexiest job of the 21st century.” A decade later, does the claim stand up? The job has grown in popularity and is generally well-paid, and the field is projected to experience more growth than almost any other by 2029. But the job has changed, in both large and small ways. It’s become better institutionalized, the scope of the job has been redefined, the technology it relies on has made huge strides, and the importance of non-technical expertise, such as ethics and change management, has grown. How it operates in companies — and how executives need to think about managing data science efforts — has changed, too, as businesses now need to create and oversee diverse data science teams rather than searching for data scientist unicorns. Finally, companies need to think about what comes next, and how they can begin to think about democratizing data science.

Ten years ago we published the article “Data Scientist: Sexiest Job of the 21st Century.” Most casual readers probably remember only the “sexiest” modifier — a comment on their demand in the marketplace. The role was relatively new at the time, but as more companies attempted to make sense of big data, they realized they needed people who could combine programming, analytics, and experimentation skills. At the time, that demand was largely restricted to the San Francisco Bay Area and a few other coastal cities. Startups and tech firms in those areas seemed to want all the data scientists they could hire. We felt that the need would expand as mainstream companies embraced both business analytics and new forms and volumes of data.

At the time, we defined the data scientist as “a high-ranking professional with the training and curiosity to make discoveries in the world of big data.” Companies were beginning to analyze voluminous and less-structured data like online clickstreams, social media, and images and speech. Because there wasn’t yet a well-defined career path for people who could program with and analyze such data, data scientists had diverse educational backgrounds. The most common qualification in our informal survey of 35 data scientists at the time was a PhD in experimental physics, but we also found astronomers, psychologists, and meteorologists. Most had PhDs in some scientific field, were exceptional at math, and knew how to code. Given the absence of tools and processes at the time to perform their roles, they were also good at experimentation and invention. It’s not that a science PhD was really required to do the work, but rather that these individuals had the rare ability to unlock the potential of data, wading through complex, messy data sets and building recommendation algorithms.

A decade later, the job is more in demand than ever with employers and recruiters. AI is increasingly popular in business, and companies of all sizes and locations feel they need data scientists to develop AI models. By 2019, postings for data scientists on Indeed had risen by 256%, and the U.S. Bureau of Labor Statistics predicts data science will see more growth than almost any other field between now and 2029. The sought-after job is generally paid quite well; the median salary for an experienced data scientist in California is approaching $200,000.

Many of the same headaches remain, too. In our research for the original article, many data scientists noted that they spend much of their time cleaning and wrangling data, and that is still the case despite a few advances in using AI itself for data management improvements. In addition, many organizations don’t have data-driven cultures and don’t take advantage of the insights provided by data scientists. Being hired and paid well doesn’t mean that data scientists will be able to make a difference for their employers. As a result, many are frustrated, leading to high turnover.

Even so, the job has changed — in both large and small ways. It’s become better institutionalized, its scope has been redefined, the technology it relies on has made huge strides, and the importance of non-technical expertise, such as ethics and change management, has grown. The many executives who recognize that data science is important to their businesses now need to create and oversee diverse data science teams rather than searching for data scientist unicorns. They can also begin to think about democratizing data science — still with the aid of data scientists, however.

Better Institutionalized

In 2012, data science was a nascent function even in AI-oriented startups. Today it is quite well-established, at least in firms with a major commitment to data and AI. Banks, insurance companies, retailers, health care providers, and even government agencies have substantial data science groups; large financial services firms may have hundreds of data scientists. Data science has also been effective in addressing societal crises, counting and predicting Covid-19 cases and deaths, helping to address weather disasters, and even fighting misinformation and cyber hacks related to the Ukraine invasion.

One important factor facilitating institutionalization has been the rise of data science-oriented educational offerings. In 2012, there were effectively no degree programs in data science; data scientists were recruited from other quantitatively-oriented fields. Now there are hundreds of degree programs in data science or the related fields of analytics and AI. Most are masters degree programs, but there are also undergraduate majors and PhD programs in data science. There are also enormous numbers of certificates, online course offerings, and boot camps in data science-related fields. There are even high school data science courses and curricula. It’s clear that anyone desiring to be trained in data science capabilities will have plenty of options for doing so. However, it’s unlikely that any single program can inculcate all of the skills necessary to conceive, build, and deploy effective and ethical data science analysis, experiments, and models. Indeed, making sense of the diverse educational choices even at a single institution is a challenge for prospective data scientists and for the companies that wish to employ them.

Data Scientists in Relation to Other Roles

The data science role is also now supplemented with a variety of other jobs. The assumption in 2012 was that data scientists could do all required tasks in a data science application — from conceptualizing the use case, to interfacing with business and technology stakeholders, to developing the algorithm and deploying it into production. Now, however, there has been a proliferation of related jobs to handle many of those tasks, including machine learning engineer, data engineer, AI specialist, analytics and AI translator, and data-oriented product manager. LinkedIn reported some of these jobs as being more popular than data scientists in its U.S. “Jobs on the Rise” reports for 2021 and 2022.

Part of the proliferation is due to the fact that no single job incumbent can possess all the skills needed to successfully deploy a complex AI or analytics system. There is an increasing recognition that many algorithms are never deployed, which has led many organizations to try to improve deployment rates. Additionally, the challenges of managing increased data systems and technologies have resulted in a more complex technical environment. There have been some attempts at certification of data scientists and related jobs, but these are not yet widely sought or recognized. Some companies, like TD Bank, have developed classification structures for the many data science-related careers and skills, but these are not common enough in organizations.

As a result of this proliferation of skills, companies need to identify all of the different roles required to effectively deploy data science models in their businesses, and ensure that they are present and collaborating on teams.

Changes in Technology

One reason the data scientist job keeps changing is that the technologies data scientists use are changing. Some technology trends are continuations of directions present in 2012, such as the use of open source tools and the move to cloud-based processing and data storage. But some affect the core of data science work. For example, some aspects of data science are increasingly automated (using automated machine learning or AutoML), which can both improve the productivity of data science professionals and open up the possibility of “citizen data scientists” with only some quantitative training. These automated tools haven’t dimmed the appeal of professional data scientists yet, but they may in the future.
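To make that concrete, the core loop that AutoML tools automate (trying candidate models and keeping whichever scores best on held-out data) can be sketched in plain Python. The model family here, a one-term polynomial of varying degree, and the synthetic data are deliberately tiny illustrative assumptions, not any real tool's method:

```python
import random

# Toy stand-in for automated model selection: search a small model family
# and keep the candidate with the lowest validation error.

def fit_power(xs, ys, degree):
    # Fit y = c * x**degree by least squares; the single coefficient has
    # the closed form c = sum(y * x**d) / sum(x**(2d)).
    num = sum(y * x**degree for x, y in zip(xs, ys))
    den = sum(x ** (2 * degree) for x in xs)
    c = num / den
    return lambda x, c=c, d=degree: c * x**d

def squared_error(model, xs, ys):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys))

random.seed(0)
xs = [i / 10 for i in range(1, 21)]
ys = [3 * x**2 + random.gauss(0, 0.05) for x in xs]  # quadratic plus noise
train_x, train_y = xs[::2], ys[::2]   # crude split: even indices train,
val_x, val_y = xs[1::2], ys[1::2]     # odd indices validate

best_degree = min(range(1, 5),
                  key=lambda d: squared_error(fit_power(train_x, train_y, d),
                                              val_x, val_y))
print(best_degree)  # 2: the quadratic candidate wins on this quadratic data
```

Real AutoML systems search far larger spaces of model types, features and hyperparameters, but the select-by-validation-score principle is the same.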

Companies should begin to democratize advanced analytics and AI within their organizations, relying on data scientists to ensure that citizen-developed models are accurate and that all relevant data is employed.

Data scientists have realized that their models can “drift” in turbulent business environments like the Covid-19 pandemic, so there is a new emphasis on monitoring their accuracy after deployment. Machine learning operations, or “MLOps,” tools provide ongoing monitoring of models; automated retraining of drifted models is just beginning to be employed. Some AutoML and MLOps tools even test for algorithmic bias.
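As a sketch of what such monitoring involves, the population stability index (PSI), one common drift statistic, can be computed in a few lines of plain Python. The bin count, thresholds and simulated feature values below are illustrative assumptions, not any particular MLOps tool's implementation:

```python
import math
import random

# Minimal drift check: compare a feature's live distribution against its
# training baseline using the population stability index (PSI).

def psi(baseline, live, bins=10):
    lo, hi = min(baseline), max(baseline)

    def binned_fractions(sample):
        counts = [0] * bins
        for v in sample:
            i = int((v - lo) / (hi - lo) * bins)
            counts[min(max(i, 0), bins - 1)] += 1  # clamp out-of-range values
        # Add-one smoothing so empty bins don't blow up the log term.
        return [(c + 1) / (len(sample) + bins) for c in counts]

    base, cur = binned_fractions(baseline), binned_fractions(live)
    return sum((c - b) * math.log(c / b) for b, c in zip(base, cur))

random.seed(1)
train_feature = [random.gauss(0.0, 1.0) for _ in range(2000)]
stable_live   = [random.gauss(0.0, 1.0) for _ in range(2000)]
shifted_live  = [random.gauss(0.8, 1.0) for _ in range(2000)]  # mean has drifted

print(psi(train_feature, stable_live) < 0.1)    # True: no alarm
print(psi(train_feature, shifted_live) > 0.25)  # True: flag for review
```

In production, a check like this would run per feature on a schedule, with scores above a threshold (0.25 is a commonly cited rule of thumb) triggering review or automated retraining.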

These developments mean that coding, which was perhaps the single most common job requirement when we wrote a decade ago, is somewhat less essential in data science. It has migrated to other jobs or is being increasingly automated. (Data cleaning is a notable exception to this trend, however.) The key focus of the job continues to shift towards predictive modeling and the ability to translate business issues and requirements into models. These are collaborative activities, but unfortunately there are as yet no great tools for structuring and supporting collaborative data science activities.

The Ethics of Data Science

A major change in data science over the past decade is that the need for an ethical dimension to the field is now widely acknowledged, though the topic was rarely mentioned in 2012. The turning point for data science ethics was probably the 2016 U.S. presidential election, in which data scientists in social media (Cambridge Analytica and Facebook in particular) attempted to influence voters and further polarized electoral politics. Since that time, considerable attention has been devoted to issues of algorithmic bias, transparency, and responsible use of analytics and AI.

Some companies have already established responsible AI groups and processes. A key function of these groups is to educate data scientists about the issues involved in ethical AI. And increased regulation is being instituted in response to ethical lapses.

. . .

We have seen both continuity and change in the data science role. It has been remarkably successful in many ways, and some of its challenges — proliferation of related roles, the need for an ethical perspective — result in part from the widespread adoption of data science. The amount of data, analytics, and AI in business and society seem unlikely to decline, so the job of data scientist will only continue to grow in its importance in the business landscape.

However, it will also continue to change. We expect to see continued differentiation of responsibilities and roles that all once fell under the data scientist category. Companies will need detailed skill classification and certification processes for these diverse jobs, and must ensure that all of the needed roles are present on large-scale data science projects. Professional data scientists themselves will focus on algorithmic innovation, but will also need to be responsible for ensuring that amateurs don’t get in over their heads. Most importantly, data scientists must contribute towards appropriate collection of data, responsible analysis, fully-deployed models, and successful business outcomes.

Editor’s note: This post has been updated.

Source: https://hbr.org/2022/07/is-data-scientist-still-the-sexiest-job-of-the-21st-century (Fri, 15 Jul 2022)
Advantest to Participate in SEMICON West 2022 at the Moscone Center in San Francisco on July 12-14

Press release content from Globe Newswire. The AP news staff was not involved in its creation.

TOKYO, July 06, 2022 (GLOBE NEWSWIRE) -- Leading semiconductor test equipment supplier Advantest Corporation (TSE: 6857) will showcase its wide spectrum of semiconductor test technologies at the SEMICON West trade show on July 12-14.

The theme of Advantest’s exhibit will be “Beyond the Technology Horizon,” reflecting the company’s focus on a portfolio of advanced technologies that accelerate the digital transformation. Highlighted within the booth will be latest advancements in developing leading-edge test solutions for applications including AI, High Performance Computing (HPC), 5G communications and Advanced Driver-assistance Systems (ADAS).

Exhibition
In booth #929 located in South Hall, Advantest will present its broad range of semiconductor test solutions and services, each designed to deliver high value to customers in the rapidly changing semiconductor ecosystem. Among the exhibits will be:

  • NEW: V93000 EXA Scale EX Test System, a compact test station enabling 4X capacity increase in IC engineering labs
  • T2000 ISS IP Engine 4, an image-processing engine for high-resolution and high-speed image processing
  • ACS open ecosystem enabling streaming data access and real-time analytics with integrated test software and hardware monitoring and control to improve semiconductor device yield, quality and capacity
  • MPT3000 test system for evaluating all solid-state drives
  • T5835 all-in-one high-speed memory test solution and T5221 NAND/NVM multi wafer-test solution, both software compatible with the industry-standard T583x test platform
  • Remotely operable test handlers, enabling device and data handling from engineering labs to production test floors
  • Software solutions and services including Adaptive Probe Card cleaning and Smart Test Cell Management.

Technical Presentations
In addition to its exhibits, Advantest will sponsor and actively participate in the Test Vision Symposium on July 13-14 during SEMICON West.


Multiple points of view

Such research holds promise for those seeking to extend the connection between a sense of place and pro-environmental behaviour to regional, continental and global scales.

That means adopting eco-friendly attitudes that can minimize adverse effects on the natural environment wherever these effects occur.

“Wicked” or complex environmental problems require students to engage with multiple places and points of view. Improved access through virtual simulations may promote empathy and overcome the inaction brought on by the psychological distance students might feel towards natural places hit hardest by climate change.


Read more: From the Amazon, Indigenous Peoples offer new compass to navigate climate change


Making the invisible visible

VR and AR lose much of their potential when they are only used to simulate outdoor environments. Instead, these technologies become transformative when students can experience environmental processes that would otherwise be invisible to them due to their scale or the timeframes over which changes occur.

Consider a virtual reality simulation known as the Stanford Ocean Acidification Experience. During this simulation, students experience the effects of a century’s worth of ocean acidification on reef biodiversity by moving “amid coral as it loses its vitality” and observing how increasingly acidic water affects marine life.

Jeremy Bailenson, professor of communication at Stanford University, discusses the Stanford Ocean Acidification Experience.

When researchers measured the effect of this simulation by comparing student test scores, they found that knowledge of ocean acidification increased by almost 150 per cent and was retained after several weeks.

Combining information sources

AR can be effective at combining different multimedia and information sources about environmental processes. Harvard researchers developed the AR tool EcoMOBILE to help middle-school students monitor water quality.

Students can play an augmented reality game on a smartphone, designed to engage them in learning about water ecosystems while they are outdoors monitoring water quality.

EcoMOBILE demo video.

The program resulted in high levels of engagement and significant gains in understanding and problem-solving.

Critical environmental education

Compared to traditional modes of outdoor education, VR and AR can provide opportunities to include diverse knowledges.

Practitioners of critical approaches to environmental education may take this opportunity to engage with stories produced by marginalized communities about their experiences of nature and climate change.

Teachers can then engage students in self-reflection while highlighting broader issues surrounding social and environmental justice.

Engaging Indigenous knowledges

Camosun Bog 360 is a virtual tour of a local wetland in Vancouver, and is one example of this approach.

Community interviews with volunteers who are engaged in bog restoration, and videos produced by the Musqueam First Nation are embedded and linked throughout the field trip. This content is also available to students in-person using QR codes and their smartphones.

One of the authors of this story, Micheal, developed related resources in partnership with the Pacific Spirit Park Society and Camosun Bog Restoration Group to use in educational settings.

Musqueam community member Louise Point talks about plants in a video that is embedded in the Camosun Bog 360 virtual tour.

The goal of the field trip is to introduce students to creatures and plants, help them reflect on colonial histories of Camosun Bog, and encourage them to protect the bog through volunteerism.

However, care must be taken. As Métis/otipemisiw anthropologist Zoe Todd explains, Indigenous knowledges are too often filtered through white intermediaries. The risk is that Indigenous voices can be lost or distorted. It is vitally important that Indigenous people tell their own stories.

In the case of Camosun Bog 360, the Musqueam Teaching Kit provides guidance to the researcher. This kit, developed by the Musqueam First Nation, encourages students and teachers to learn about their culture, language and histories. It provides links, videos and other teaching materials for sharing with students.

Building environmental stewards

Those who are skeptical of whether VR and AR can support in-person outdoor education should consider the important role these technologies play in equipping students to navigate challenges today.

Indeed, skills like digital literacy, creative thinking, communication, collaboration and problem solving are more essential than ever as students transition to the professional world.

VR and AR can enable students to participate in solving complex environmental problems, present and future. A drawback is the rapid pace of advancement in hardware, software and implementation: schools can be slow to adopt new technologies, both because of the time it takes to train instructors and because of economic and administrative barriers, and they must also consider how long an investment will remain worthwhile.


Read more: Investing in technologies for student learning: 4 principles school boards and parents should consider


The environmental stewards of tomorrow will need to adapt to the new tools researchers and professionals are using to understand, address and communicate wicked environmental problems. Without appropriate training and practice using these technologies, students could be put at a disadvantage as they enter higher education and the workforce.

Educators have a role in empowering students as stewards, such as finding new ways to include emerging technologies in environmental education.

Originally published by The Conversation, July 11, 2022: https://theconversation.com/virtual-reality-can-support-and-enhance-outdoor-environmental-education-183579