Do not miss the HP2-H78 exam questions and practice tests, available from killexams.com.

IT professionals created the killexams.com HP certification cheat sheet. Many students have complained that other Implementing HP Access Control 2022 question dumps contain so many questions that they are simply too exhausting to work through. The killexams.com specialists built this comprehensive version of the HP2-H78 exam questions after extensive study and analysis, ensuring that every topic is covered without overwhelming the candidate. Everything is designed to make the certification process easier for candidates.

Exam Code: HP2-H78 Practice test 2022 by Killexams.com team
Implementing HP Access Control 2022
HP Implementing test
Testing in a Continuous Delivery world

Software development went and got itself in a big damn hurry. Agile practices combined with continuous integration and delivery practices have dramatically sped up the development life cycle, reducing project sprints from months to weeks to even days.

Testing, on the other hand, is not quick. Manual testing is a time-consuming and labor-intensive process to ensure a piece of software does what it’s supposed to, no matter how fast it was developed. The challenge for both developers and testers in a landscape increasingly dominated by agile is figuring out how to bring testing in line with the pace of development and delivery without sacrificing quality.

“When that bottleneck around deployment was taken away and all of a sudden code could truly flow naturally and smoothly into production, there was a step back in terms of quality,” said Matt Johnston, chief marketing and strategy officer of Applause (formerly uTest). “[Customers] would come to us and say we want to test one build per week. Then all of a sudden between the shift to agile and continuous integration, suddenly they had 10 builds a week. Dev and DevOps could move faster and QA wasn’t able to keep up, so organizations just decided to launch and see how it goes.”

In this shift to continuously integrated and delivered software, in which developers integrate code into a shared repository several times a day (enabling a change or version to be safely deployed at any time), testing is shifting toward a continuous model as well. Through a combination of test automation and the merging of development and testing processes under a Dev/Test philosophy, testing providers and QA teams are beginning to implement practices to test software builds as rapidly as they’re being churned out.

“In today’s world, it’s just too slow to be in this sequential process of dev, test, deploy and manage,” said Tom Lounibos, CEO of SOASTA. “Developers have always tested; they’ve always done unit testing, application development testing, but then they throw it over the wall to the QA guys who would do functional, load and other testing. Testing is now starting to be done by developers far more frequently. QA professionals are still very much there, but they’re trying to automate the process as well. That’s the big shift around continuous integration and continuous testing, so that speed can be improved without rushing software out the door.”

Automating the conveyor belt
Continuous testing doesn’t happen without automation. In traditional manual testing, a developer checks in a change and it goes through a build process that may take hours or days to produce feedback. Automation accelerates the cycle, checking the code and providing feedback in a matter of minutes. By no means does this signal an end to manual testing—which remains essential in exploratory and regression testing at the Web, UI and mobile level—but automation frameworks and tools are proliferating throughout the process.

“If there’s one overarching principle, it’s to automate everything,” said Steve Brodie, CEO of Electric Cloud. “That means you have to automate all the types of testing you’re doing. The key is orchestrating the pipeline: It’s one thing to have these silos of automation doing automated load or regression testing, automating builds or even deployments. But what you need to do is automate that whole end-to-end pipeline, from the time the developer checks the code, all the way through to production and deployment.”
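What “orchestrating the pipeline” means in practice is easiest to see in miniature. The sketch below is not any vendor’s syntax; the stage names and shell commands are placeholders, and the only point is that every phase, from check-in to production deployment, runs from one automated, fail-fast script rather than as separate islands of automation.

```python
import subprocess
import sys

# Hypothetical pipeline stages; the commands are placeholders for whatever
# build, test, and deployment tooling a real project already uses.
PIPELINE = [
    ("build",             "make build"),
    ("unit tests",        "make test-unit"),
    ("deploy to QA",      "./deploy.sh qa"),
    ("integration tests", "make test-integration"),
    ("load tests",        "make test-load"),
    ("deploy to prod",    "./deploy.sh prod"),
]

def run_pipeline():
    for name, command in PIPELINE:
        print(f"--- stage: {name} ---")
        result = subprocess.run(command, shell=True)
        if result.returncode != 0:
            # Fail fast: a broken stage stops the whole pipeline,
            # so a bad build never reaches the later environments.
            sys.exit(f"stage '{name}' failed, aborting pipeline")
    print("pipeline complete: change promoted to production")

if __name__ == "__main__":
    run_pipeline()
```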

Automation requires much effort on the part of the organization to do correctly. Machines need to be provisioned and configured, manually or virtually, and testing environments need to be spun up and deployed for each application. Yet Brodie believes a misconception exists around automated testing and the quality of those tests.

“You’re only as fast as the slowest element, the slowest phase in your software delivery pipeline,” he said. “You’ve got to automate the QA environment and the systems integration testing environment. You’ve got to deploy into the performance-testing environment. If you have user acceptance tests, you’ve got to deploy it there too. But a lot of people think that by deploying faster with Continuous Delivery, quality will suffer. But what’s fascinating is that the inverse is often true, particularly if people are releasing more quickly because the batch size is changing. The magnitude of changes you’re deploying is much smaller.”

Once deployed, an automation solution is not without its kinks. Automated tools are still relatively new, and the process can result in inconsistent reporting, false positives and botched execution. Hung Nguyen, CEO of LogiGear, said the idea is there, but the biggest challenge to automation is smoothing out the release process.

“Think of your entire development cycle as an automated assembly line,” he said. “Once you turn on the conveyor belt, you don’t have to worry about what’s going to come out the other side. But when you get to the system level of testing, thousands and thousands of test cases are running against these virtual machines, and it tends to have some timing problems. Open-source and commercial tools used in combination are just not robust enough yet, so you end up with a lot of so-called false positives and end up debugging.”

The rise of Dev/Testers
The ripple effects of Continuous Delivery are changing the way developers and testers work together, blurring the lines between the two roles and skill sets. As a consequence, developers are learning how to test, and testers are becoming entrenched in the development process. It’s the manifestation of a Dev/Test philosophy.

“Testers are moving in closer to the development side, embedded into teams with developers,” said Tim Hinds, product marketing manager at Neotys. “You see this a lot in Scrum and other agile development teams. The testers are well informed about what PaaS developers are working on proactively to design their test scripts accordingly. So that whenever the code has been written and needs to be tested, they’re familiar with what’s occurring and not just getting something thrown over the wall to them.”

Dev/Testers are upending the way organizations approach testing while adopting agile and Continuous Delivery practices. Organizations that in the past have invested in independent centers of excellence for testing best practices are transitioning to have testing resources sitting alongside development resources, and as a result, the role and skill sets of testers need to evolve to fill that Dev/Tester role of ensuring code quality within an agile team.

“There’s a lot more of a need for testers to understand the application architecture, understand the APIs,” said Kelly Emo, director of applications product marketing at HP Software. “If you’re doing API testing, you’ve got to understand that API and that programming model, the underlying architecture. You may need to understand its interdependency with other components of that composite app.

“There’s this new hierarchy of testing, where you have testers sitting alongside developers doing more API testing or functional testing at the application level. Downstream you’re still going to have testers managing the regression sets or doing exploratory testing. They can be more of what people think of traditionally as a black box or manual tester. Those roles still exist, but now you have both.”

A tester’s skepticism
In the shuffle of bringing testing up to speed with development, there is a danger of losing sight of what testing was originally intended to do. Magdy Hanna, CEO of Rommana Software and chairman of the International Institute for Software Testing, implored organizations to not so easily dismiss manual testing or discount the importance of practices such as regression testing in the rush to deliver software.

“With agile, Continuous Delivery and continuous integration, I get very concerned about overlooking the value of regression testing, which guarantees that things work,” he said. “Some projects and teams thought continuous integration would be a good way to eliminate or at least minimize their regression testing, which is always a stumbling block. Sometimes, in order to push the release to production faster, we overlook or undermine the value of regression testing. I’ve seen projects that actually deliver software faster by cutting down on how much final system, acceptance and regression testing they do.

“Let me make this clear: Continuous integration will never replace regression testing—regression testing by qualified testers, not by the developers, who understand the behavior of all the features supported in the previous iteration or sprint. As a developer, I only understand the feature I wrote and implemented. Don’t expect me to do a very good job in making sure that all the other features I don’t really understand are still working.”

Hanna is also wary of relying on automation tools driving continuous testing efforts. While manual testing requires a physical tester, scripts govern automation. In this push toward a faster life cycle, he is concerned about developers and testers losing sight of a project’s ultimate goals.

“In order for Continuous Delivery and integration to succeed, they rely heavily on individuals writing scripts for tools,” he said. “The scripts need to be written not only to test the feature being implemented or the feature you are implementing, but the feature we delivered a year ago still has to work.

“There’s always trade-off. Delivering high-quality systems fast means cutting corners, and cutting corners in Continuous Delivery has affected the most critical aspects of the projects: the requirements. I can get developers to write code very fast and push code into production, but what does the code do? Why are we forgetting that we’re only writing code to implement a feature, a requirement or a behavior that the customer wanted?”

The first inning
While the growth of agile and the rise of Continuous Delivery and integration are tangible, continuous testing is still in its infancy. Organizations are still figuring out what it is, and both developers and testers are still in the process of grappling with not only how accelerated testing affects them, but also how to automate it effectively.

“From the standpoint of implementation versus awareness, we’re in the first inning of a nine-inning game,” said SOASTA’s Lounibos. “Awareness is pretty strong. It feels a little bit like 2009 and 2010 in cloud computing. Everyone was talking about it, but there weren’t that many people implementing. Early adopters are out there, but people have to get familiar with what continuous testing even means: How do they implement it? What are the best practices?”

The early adopters are the ones who, according to Applause’s Johnston, are phasing out things like centers of excellence and large outsourcing contracts—the equivalent of a large standing army—for a nimble Special Forces unit, the integrated developer and tester teams implementing automated continuous testing.

“The companies that are trotting out the same playbook of mainframe to desktop and desktop to Web applications are in the tall grass, completely lost in the weeds,” he said. “That’s what it takes: Wiping the whiteboard clean and saying ‘Okay, all the muscles we’ve built in the past 15 years from Web, a lot of those don’t really apply. The big investment we made with this vendor or that longtime outsourcing relationship or that Center of Excellence we thought we’d be using for 30 years, that’s either not going to be a part of the solution as we go forward, or just a part.’ ”

As adoption climbs, testing in a continuously delivered environment is also moving away from a development and testing process partitioned into silos. Think of the developer cliché where someone slides a pizza under the door and out comes code. As developers and testers hop the fence, testing is moving toward a more integrated and virtualized process aligned with a continuous ALM solution.

“Instead of people talking about wanting to automate tests, about hooking virtualization capabilities into a development tool, you’ll see much more of a hub that can deploy and take advantage of what happens when you put automation and virtualization together,” said HP Software’s Emo. “It’ll enable automatic provisioning of virtual services you’ve discovered from your application architecture and make it available for your tester. Once the defect is found, you can automatically roll up that defect combined with a virtual service so your developer has a single environment to work with the next day.”

Automation. Virtualization. The amalgamation of developers and testers in a more fluid, concurrent software development life cycle. They’re all elements in the shift to continuous testing, which if SOASTA’s Tom Lounibos’ vision comes to fruition, may resemble something like “The Matrix.”

“Picture that concept of living in a world that’s actually a computer program, and if we’re in a meeting of 10 people, only two are real and the rest are computer generated,” he said. “That’s how we see testing in the future: a test matrix. There’ll be real people on your website or application, but there will be a constant flow of fake users anticipating problems of the real ones. Imagine virtual users trying to get ahead of real users’ genuine experiences. That’s where continuous testing is going.”

Best practices for continuous testing
As organizations and testing providers transition from manual to continuous testing, a new set of best practices is vital in keeping testing teams on track, optimizing resources and delivering a working application at the speed of agile.

• Daily, targeted testing: Gigantic, exhaustive tests are ineffective. Daily load tests in low volumes of concurrent users can help uncover smaller scaling issues, and targeted sample testing of software on various OSes, devices, carriers and applications is more effective and cheaper than running through thousands of test cases in every single environment.

• Test in production: Rather than testing in a controlled lab setting, testing in production (while real users browse a website or application) gives the most accurate indication of how a piece of software will perform.

• Scale test volume: Break a test suite into smaller chunks of tasks running in parallel to the automated deployment. This makes the code easier to execute and debug without human intervention. (A minimal sketch of this kind of sharding appears after this list.)

• Diagnose the root cause: A test passing, failing, or producing a critical bug report is less important than finding the root cause of the failure in the code. Testers diagnosing the root cause stop engineers and testers from wasting time and resources tracing symptoms.

• Don’t lose sight of SLAs: Putting service-level agreements on a task board or list of constraints (so that every time a test or build is run, testers know what SLAs the new application, features or functionality have to pass) will keep application quality up while maintaining development speed.

• Nightly and end-of-sprint testing: Continuously integrated builds undergo automated testing whenever a developer pushes code to a repository, but running larger tests at specific times is still valuable. During a nightly build, run a full site or application load test for whatever you expect the user base to be at any given time. Then, toward the end of an iteration or sprint, stress the application to its breaking point to set a new bar for how many concurrent users it can handle.

• Hybrid Tester/Architect: A test architect aligned with the application architect can help determine, based on the application footprint, what the next automated components and test assets should be, to better manage the overlying test framework and promote use of reusable automated assets whenever possible.

• Don’t sleep on metrics: Metrics ingrained within the automation process can create quality gates to maintain a well-defined quality state. Without measuring how automated tests are performing to make actionable improvements, testers run the risk of promoting defects faster through the testing pipeline.

• Practice, practice, practice: Virtualization is the testing equivalent of a flight simulator, allowing simulation of every possible user experience. The better understanding developers and testers have of where problems may occur, the more prepared they’ll be.
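To make the “scale test volume” practice above concrete, here is a minimal sketch of splitting a suite into shards and running them in parallel. The module names and the runner command are placeholders, not any particular framework’s interface; a real pipeline would substitute its own test harness.

```python
import concurrent.futures
import subprocess

# Hypothetical list of test modules; a real suite would discover these.
TEST_MODULES = [f"tests/test_module_{i}.py" for i in range(40)]
SHARD_COUNT = 8

def make_shards(modules, shard_count):
    """Deal the test modules into roughly equal shards."""
    shards = [[] for _ in range(shard_count)]
    for index, module in enumerate(modules):
        shards[index % shard_count].append(module)
    return shards

def run_shard(shard):
    # Placeholder runner command; swap in the project's real test runner.
    result = subprocess.run(["echo", "running:", *shard])
    return result.returncode

def run_suite_in_parallel():
    shards = make_shards(TEST_MODULES, SHARD_COUNT)
    with concurrent.futures.ThreadPoolExecutor(max_workers=SHARD_COUNT) as pool:
        return_codes = list(pool.map(run_shard, shards))
    # The suite passes only if every shard passes.
    return all(code == 0 for code in return_codes)

if __name__ == "__main__":
    print("suite passed:", run_suite_in_parallel())
```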

Subtle benefits and hidden obstacles
Everyone knows Continuous Delivery and testing speed up the development cycle. Everyone knows you need to automate. Neotys’ Hinds and HP Software’s Emo laid out a few of the advantages people wouldn’t immediately associate with continuous testing, and some of the more subtle challenges to doing it right.

Benefit: Avoiding late performance problems: “It’s always cheaper to make changes earlier in development than to have something deployed to production and going back to add a hotfix. It also allows people to make sure that whenever you’re releasing new features into production, you’re not allowing any sort of performance regression; not allowing old bugs to creep their way back in.” —Hinds

Benefit: Mitigating technical debt: “If you’re seeing load, performance or security issues early on, you’re likely not to let them propagate or let them get consumed in other composite applications. It also creates an interesting conversation between your tester, your developer and your product manager really pushing on those user story functions, really pushing on the requirements.” —Emo

Challenge: Shorter development cycles: “When moving performance testing to continuous environments, testers need to adapt. You’re getting a new build way more often than you are in a more traditional waterfall environment, though you’ve got to do basically the same number of tests you were doing before, except now you’ve got them every two weeks or less.” —Hinds

Challenge: Skill set: “Making sure you have the folks with the level of understanding needed to be able to do this kind of testing, but also to engineer the process, the infrastructure. There is a special skill in being a really good tester. They need to put in place the continuous integration process connected to your test automation suite and connect it back into your ALM system so you know the results the next morning and you’re able to act on it.” —Emo

How HP Designers Think About Sustainable PCs

A visit to HP’s Design Studio, where the team takes creative leaps and deliberate steps in the quest for good-looking and eco-positive products.

Northampton, MA --News Direct-- HP Inc.

Stacy Wolff outside the CMF (colors, materials, finishes) library.

In a conference room at HP’s Silicon Valley campus, a cornucopia of materials is placed all around. On the table and walls are swatches in fashion-forward colors (teal green, scarlet, rose gold) and novel textures (mycelium foam, crushed seashells, recycled rubber from running tracks, fabric from recycled jeans). Even more unexpected: pairs of high-end athletic shoes, and lots of them; luggage and backpacks, teapots and totes; stacks of gorgeous coffee-table books on subjects ranging from furniture to architecture — all to inspire the look and feel of devices that HP has yet to imagine.

Being able to touch, test, and debate about these items in person is part of the process, a creative collaboration Global Head of Design & Sustainability Stacy Wolff and his talented team of designers are grateful to be able to do side by side again inside their light-filled studio in Palo Alto. With each iteration of an HP laptop, desktop, or gaming rig, they endeavor to push the bounds of sustainable design while offering consumers a device that they’re proud to use each day.

For the last few years, HP’s design work has gained recognition, evidenced by the studio’s gleaming rows of awards. But there’s not a single name listed on any of them. “Everything we do is by collective effort. We win as a group, and we lose as a group,” says Wolff. “If you won an award, someone else had to do maybe a less glamorous job to provide you the freedom to do that.”

The team of 73 creatives in California, Houston, and Taipei are from backgrounds as varied as design, engineering, graphics, anthropology, poetry, ergonomics, and sports journalism. There’s one thing they have in common, though. Disagreements are dealt with by amping up their communication and doubling down on what they know to be their source of truth. “If we let the customer be the North Star, it tends to resolve almost all conflict,” Wolff says.

HP’s head of design has led a massive shift in how HP approaches design since its split from HPE in 2015, steering the company toward a more unified, yet distinct, visual identity, and a willingness to experiment with both luxury and mass-market trends. Wolff’s team is responsible for delivering the award-winning HP Spectre and ENVY lines, including the HP Spectre 13 (at the time of launch, hailed as the world’s thinnest laptop); the HP Spectre Folio (the first laptop with a leather chassis); the HP ENVY Wood series (made with sustainably-sourced, genuine wood inlays); and the HP Elite Dragonfly (the world’s first notebook to use ocean-bound plastic). Among the honors: In 2021, HP received seven Green Good Design Awards from the European Centre for Architecture Art Design and Urban Studies and the Chicago Athenaeum: Museum of Architecture and Design.

Today, Wolff and his team are in their recently outfitted studio, which opened late last year in HP’s Palo Alto headquarters. In the common areas, there is an inviting atmosphere of warm wood and soft, textured surfaces. Designers are tapping away at their keyboards, breaking off to share quick sketches and notes in an informal huddle around a digital whiteboard. In the gallery — an airy space that looks a lot like an upscale retail store — foam models, proof-of concept designs, and an array of laptop parts, keycaps, speakers, and circuit boards are splayed out on stark white countertops. Light from the courtyard pours in from the floor-to-ceiling windows.

“The studio has become a home,” says Wolff, who’s been with the company for 27 years. “When you think about a house, where does everybody go? Where is the love, and creation, and the stories being told? All that is shared in the kitchen.”

Granted this kitchen also has a really, really nice espresso maker.

The new space, like the kitchen, bubbles with energy and fuels the collaborative process, which was somewhat stifled when everyone was working remotely. “Creativity is a magical thing,” Wolff says. “That’s why it’s so important to design in a common space. We took for granted the process of organic product development. When you work from home, it becomes almost serial development. There’s no serendipity.”

After months of improvising the tools they needed to work together, the team finds that being back in the office is where they can be most creative and efficient. “Designers are very hands-on,” says Kevin Massaro, vice president of consumer design. “Everything in the studio is tactile.”

Yet, the time spent working remotely produced valuable insights that are informing future products, such as a PC camera disaggregated from the monitor so it can be manipulated to capture something on a person’s desk (like a sketch); super-wide-screen displays with integrated light bars that offer a soft backlight for people working late at night; and monitors that adjust to taller heights, to better accommodate a standing desk.

In recent years, the team has also turned its sights toward defining — and redefining — what sustainable design means for HP. In 2021 HP announced some of the most aggressive and comprehensive climate goals in the technology industry, bringing new complexity — and new gravitas — to what Wolff and his team are aiming to accomplish.

“You’re no longer just a company that’s manufacturing technology, you’re a company that’s helping to better people’s lives,” Wolff says. Working toward HP’s goal to become the most sustainable and just technology company is less about integrating greater percentages of recycled materials into new products, and more about an accounting of the entire life cycle of a device, from the electricity used over its lifetime and the minerals mined for its batteries, to the chemicals used in its painted powder coating and what exactly happens to a product when returned for recycling.

When a customer opens a box made of 100% recycled molded fiber packaging to reveal the premium Elite Dragonfly PC, which made waves for being the first notebook with ocean-bound plastic, that’s where this team’s efforts become tangible.

The Dragonfly isn’t only a triumph of design, it proved that circularity can be an integral part of mass-manufacturing for personal electronics. The third generation of that same device, released in March (see “How the HP Elite Dragonfly Took Flight,” page 36), raised the bar for battery life and weight with a new process that fuses aluminum and magnesium in the chassis, the latter of which is both lightweight and 100% recyclable.

This was a feat of engineering alchemy, says Chad Paris, Global Senior Design Manager. “Not only do you have different properties of how these metals work together, it was a challenge to make sure that it’s seamless,” he says. The team innovated and came up with a thermofusion process that lends a premium feel to the Dragonfly while keeping its weight at just a kilogram.

This inventiveness dovetails with Wolff’s pragmatic approach to sustainability. Not only does each change have to scale for a manufacturer the size of HP, it has to strike the right balance between brand integrity and forward-leaning design. “We can take waste and make great things,” Wolff says, gesturing at a pile of uniform plastic pellets that used to be a discarded bottle. “But ultimately, we want our products to live longer, so we’re designing them to have second lives.”

A sustainable HP notebook, no matter what materials it’s made from, needs to look and feel like HP made it, says Sandie Cheng, Global CMF Director. The CMF (colors, materials, finishes) library holds thousands of fabric swatches, colored tiles, and paint chips and samples, which Cheng uses as inspiration for the look and feel of fine details such as the touch pad on a laptop, the smooth glide of a hinge, or the sparkle of the HP logo peeking through a laser-etched cutout.

Cheng and her team head out on scouting trips to gather objects from a variety of places and bring them back to the studio, composing their own ever-changing mood board. In the CMF library, there are Zen-like ceramic-and-bamboo vessels picked up from an upscale housewares boutique in San Francisco alongside scores of upholstery samples in chic color palettes, hunks of charred wood, and Nike’s Space Hippie trainers.

Most of these materials will never make it to production, but they offer up a rich playground for the team’s collective imagination. Foam made from mycelium (i.e., fungi threads) is an organic material that can be grown in just two weeks. Perhaps one day it could be used as material to cover the Dragonfly chassis, even if right now it couldn’t survive the daily wear and tear we put on our PCs. Or its spongy, earthy texture might inspire a new textile that lends a softer feel to an otherwise hard-edged device on your desk.

“We as designers have to think outside the box to stay creative and inspired, but we also have to develop materials that can be used for production,” Cheng says. “It’s a balance of staying creative and also being realistic.”

The same holds true for how the materials are made. Manufacturing with fabric is notorious for producing massive amounts of waste because of the way patterns are cut, but HP wants to change that with its own soft goods, such as the HP Renew Sleeve. It’s made with 96% recycled plastic bottle material, and importantly, the 3D knitting process used to make the laptop sleeve leaves virtually zero waste, generating only a few stray threads.

Earlier this month, Cheng and her team went to Milan, Italy, for fresh inspiration. They attended Salone del Mobile 2022, one of the industry’s largest textile, furniture, and home design trade shows, to get a sense of the big design trends of the next few years, including what Cheng calls “the centered home,” which evokes feelings of comfort, coziness, and calm.

She explains that the blurring of work and life means that what consumers want in their next device, whether it’s one issued by their company or selected from a store shelf, is something that looks and feels like it fits into their personal spaces. “Your PC should be really versatile and adapt to whichever environment you’re in and how you want to use it,” she says.

Consumers also want to feel good about their purchase, which increasingly means choosing brands that care for the finite resources on our shared planet. A 2021 report by research firm IDC found that 43% of 1,000 decision-makers said sustainability was a critical factor in their tech-buying choices.

As the Personal Systems designers charge ahead into a sustainable future — whatever it brings — they’ll surely do it in their iterative, measured, and collaborative way.

“When it comes to sustainability, it’s all about forward progress, and everyone’s job is a sustainability job,” Wolff says. “As founder Dave Packard said, ‘The betterment of our society is not a job to be left to the few. It’s a responsibility to be shared by all.’”

Online Exclusive Technical Q&A: AI’s use in chemical plant operations

7/15/2022

Hydrocarbon Processing (HP) sat down with Dr. Hiroaki Kanokogi (HK), General Manager, Yokogawa to discuss how artificial intelligence (AI) can and is being used in the hydrocarbon processing industry. Dr. Kanokogi’s organization recently announced that it used AI to autonomously control a chemical plant for 35 consecutive days. The AI used in this control experiment, the Factorial Kernal Dynamic Policy Programming (FKDPP) protocol, was jointly developed by Yokogawa and the Nara Institute of Science and Technology.

 HP: What makes this AI (FKDPP) different from other forms of AI that can be applied in plant operations?

HK: In the industrial AI sector, the vast majority of AI is what we call “problem analysis AI.” This kind of AI analyzes the data that is provided to detect anomalies for predictive maintenance, predict quality or determine the cause of issues. It is generally used to support human decision-making.

In this case with a chemical plant, we are talking about autonomous control AI, which searches for the optimal control model by itself and then implements that. There are several forms of AI for control (TABLE 1); however, based on the analysis of a global survey in February 2022, our organization confirmed that there were no other forms of AI that directly change the manipulated variable in a chemical plant. We are very confident about this. This uniqueness can deliver a great benefit to customers, as this next-generation control technology can control operations that have been beyond the capabilities of existing control methods (PID control/APC) and have up to now necessitated manual operation based on the judgements of plant personnel.

TABLE 1. Primary characteristics of AI used in plant control

Type: Autonomous control
Features: For areas that cannot be automated with existing control methods (PID control/APC), the AI deduces the optimum method for control on its own and has the robustness to autonomously control, to a certain extent, situations that have not yet been encountered. Based on the control model it learns and deduces, the AI inputs the level of control (manipulated variable) required for each situation.
Benefits: The benefits of FKDPP are as follows:
(1) Can be applied in situations where control cannot be automated with existing control techniques (PID control and APC), and can handle conflicting targets, such as achieving both high quality and energy savings.
(2) Increases productivity (quality, energy saving, yield, shorter settling time)
(3) Simple (small number of learning trials, no need to import labeled data)
(4) Explainable operation
(5) Same safety as conventional systems (highly robust, can be directly linked to existing integrated production control systems)

Type: Support for areas with automation built-in
Features: AI can take over the task, currently performed by operators, of inputting target values (set value) for areas where automation has been implemented using existing control methods (PID control/APC). AI uses past control data to perform calculations and enters target values (set value).
Benefits: Automation of manual tasks and achievement of stable operations is possible.

Type: Operational support for people
Features: AI proposes target values (set value) that operators will refer to when performing operations. AI uses past control data to suggest target values (set value) to humans.
Benefits: Differences due to operator proficiency level will disappear.

HP: What were the major benefits of incorporating AI within the chemical plant setting? 

HK: It could autonomize an area that could not be automated with existing control methods, while ensuring safety and improving productivity.

Until now, there have been many parts of the plant that have not been fully automated. The next generation control technology using reinforcement learning-based AI (FKDPP) will autonomize areas that could not be automated with existing control methods while ensuring safety and improving productivity. FKDPP is a disruptive innovation that allows for a different dimension of control, particularly in such areas. This AI technology can be applied in the energy, materials, pharmaceuticals, and many other industries where the daily monetary value of operations in large-scale plants is in the range of tens of millions of dollars. Autonomous control AI (FKDPP) can greatly contribute to the autonomization of production around the world, ROI maximization, and environmental sustainability, and will have a major economic impact.

HP: How can FKDPP generate a control model in only around 30 learning trials?

HK: Autonomous control is possible with our unique and original algorithm that requires only around 30 learning trials. Yokogawa has been developing the control AI since 2017. Yokogawa’s core competence and strength lies in measurement, control, aggregating information, and producing value. This unique AI algorithm incorporates our operational technology (OT) know-how on the gathering of sensor data from throughout plants to optimize plant operation and control. By implementing the knowledge Yokogawa has for controlling plants, we can eliminate the number of calculations drastically and generate the control model with that number.

There is no AI that is fit for all purposes. Wolpert and other people proved mathematically that "machine learning can produce excellent results when it is domain-specific" in 1995. This is a famous theory to predict the development of AI and machine learning by domain. AI specific to a particular field or domain may exceed human capabilities. So, both deep understanding on AI itself and the domain knowledge are required.
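Yokogawa has not published FKDPP’s internals, and the toy sketch below is not their algorithm. It is only meant to make the general idea of reinforcement-learning-based control concrete: an agent learns, by trial and error against a simulated process, which manipulated-variable setting to apply in each state. The tank model, reward, and every parameter here are invented for illustration.

```python
import random

# Invented toy process: a tank with a fixed outflow; the agent picks an
# inflow valve setting each step and is rewarded for holding a target level.
LEVELS = range(0, 11)          # discretized tank level, 0..10
ACTIONS = range(0, 5)          # valve settings: inflow of 0..4 units
OUTFLOW = 2                    # fixed demand drawn from the tank each step
TARGET = 7

def step(level, action):
    new_level = max(0, min(10, level + action - OUTFLOW))
    reward = -abs(new_level - TARGET)      # closer to target = better
    return new_level, reward

# Tabular Q-learning, the textbook reinforcement-learning baseline.
q = {(s, a): 0.0 for s in LEVELS for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):                 # learning trials
    level = random.choice(list(LEVELS))
    for _ in range(30):
        if random.random() < epsilon:      # explore occasionally
            action = random.choice(list(ACTIONS))
        else:                              # otherwise act greedily
            action = max(ACTIONS, key=lambda a: q[(level, a)])
        new_level, reward = step(level, action)
        best_next = max(q[(new_level, a)] for a in ACTIONS)
        q[(level, action)] += alpha * (reward + gamma * best_next - q[(level, action)])
        level = new_level

# The learned policy: which valve setting to use at each tank level.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in LEVELS}
print(policy)
```

Even this toy example shows why the number of learning trials matters: the table of state-action values only becomes useful after enough episodes, which is where domain knowledge that prunes the search pays off.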

HP: Can AI do all operations? Or are plant personnel still needed for operations?

HK: In the field trial, the AI directly controlled the operations through the DCS without the need for human intervention. This AI has the potential to be used for controlling a wide range of operations across a variety of industries. However, although the AI can carry out optimal operations of the controlling point, plant personnel are still needed to monitor the status in the control room, just like we do for PID control and APC.

HP: Where can we utilize FKDPP in the energy industry?

HK: There are still many operations in the energy industry that are difficult to control automatically, and so are basically still managed manually by skilled operators. This is because chemical reactions tend to be nonlinear and are affected by disturbances, so that makes it difficult to use a mathematical approach using PID. We think that there is a possibility to enable optimal and autonomous control in such difficult areas.

One example is the control of the large boilers that produce the steam used by rotating turbines for thermal power generation. A related application is gas combustion control in gas turbines. Regarding renewable energy, controlling how geothermal energy is efficiently used for power generation is quite a challenge. FKDPP allows us to leapfrog the automation stage and go directly from manual to autonomous control in these kinds of areas.

HP: What was a major takeaway from this exercise? 

HK: The biggest takeaway was that we can ensure safe autonomous control with AI that improves productivity and reduces cost and time loss.

This test confirmed that reinforcement learning AI can be safely applied in an actual plant and demonstrated that this technology can control operations that have been beyond the capabilities of existing control methods (PID control/APC) and have up to now necessitated the manual operation of control valves based on the judgements of plant personnel. Also, losses in the form of fuel, labor costs, time, etc. that occur due to production of off-spec products were eliminated.

HP: What's next for this form of AI? Do you plan on deploying this on other petrochemical/refining units?

HK: We are certainly looking to work with customers on field trials for other processes and applications to confirm the versatility and robustness of FKDPP, and demonstrate the value in terms of the profitability and sustainability benefits it can deliver. This time, we established and verified the three steps for ensuring safe operations. Next, we need to streamline this process so that customers can test and deploy this technology as quickly as possible.

HP: Going forward, do we foresee AI replacing the traditional method (PID)? Or will it be limited to few niche applications?

HK: FKDPP can be applied to most kinds of control including situations that could not be automated with existing control techniques (PID control, APC). Not only that, we have confirmed in a variety of application experiments that FKDPP can achieve stabilization 1/2 to 1/3 quicker than conventional control (PID control), without overshooting. This characteristic will be beneficial for customers who have furnaces and injection molding machines.

BIO

Dr. Hiroaki Kanokogi is the General Manager at Yokogawa. He joined Yokogawa in 2007 and is currently pursuing the development, application, and commercialization of AI designed for production sites. Dr. Kanokogi is one of the inventors of the FKDPP algorithm, and he was previously engaged in machine learning application R&D at Microsoft Japan. He holds a Ph.D. from the University of Tokyo.

Save Money And Have Fun Using IEEE-488

A few months ago, I was discussing the control of GPIB equipment with a colleague. Based on only on my gut feeling and the briefest of research, I told him that the pricey and proprietary GPIB controller solutions could easily be replaced by open-source tools and Linux. In the many weeks that followed, I almost abandoned my stance several times out of frustration. With some perseverance, breaking the problems into bite-sized chunks, and lots of online searching to learn from other people’s experiences, my plan eventually succeeded. I haven’t abandoned my original stance entirely, I’ve taken a few steps back and added some qualifiers.

What is GPIB?

Example of HP-IB block diagram from the 1970s, from hp9845.net

Back in the 1960s, if test equipment was interconnected at all, there weren’t any agreed-upon methods for doing so. By the late 60s, the situation was made somewhat better by card-cage controller systems. These held a number of interface cards, one per instrument, presenting a common interface on the backplane. Although this approach was workable, the HP engineers realized they could significantly Boost the concept to include these “bridging circuit boards” within the instruments and replacing the card cage backplane with passive cables. Thus began the development of what became the Hewlett-Packard Interface Bus (HP-IB). The October 1972 issue of the HP Journal introduced HP-IB with two main articles: A Practical Interface System for Electronic Instruments and A Common Digital Interface for Programmable Instruments: The Evolution of a System.

To overcome many of the problems experienced in interconnecting instruments and digital devices, a new interface system has been defined. This system gives new ease and flexibility in system interconnections. Interconnecting instruments for use on the lab bench, as well as in large systems, now becomes practical from the economic point of view.

HP subsequently contributed HP-IB to the IEC, where it became an international standard. Within a few years it become what we know today as the GPIB (General Purpose Interface Bus) or IEEE-488, first formalized in 1975.

The Task At Hand

Why did I need to use a 50-year old communications interface? Since GPIB was the de-facto interface for so many years, a lot of used test equipment can be found on the second-hand market for very reasonable prices, much cheaper than their modern counterparts. Also, the more pieces of test equipment ending up on lab benches means less of them end up in the recycling system or landfills. But I don’t need these justifications — the enjoyment and nostalgic feeling of this old gear is reason enough for me.

Diagram of a typical digipot, the TPL0501 (from Digikey Article Library)

But why would you want to talk to your test equipment over a computer interface in the first place? In my case, I had a project where I needed to calibrate the resistance of a digipot at each of its programmable wiper positions. This would let me create a calibration algorithm based on measured data, where you could input the desired ohmic value and obtain the corresponding wiper register value. Sure, I could make these measurements by hand, but with 256 wiper positions, that would get tedious real fast. If you want to learn more about digipots, check out this article from the Digikey’s library on the fundamentals of digital potentiometers and how to use them.
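The calibration itself boils down to a lookup table. The sketch below assumes the 256 measurements have already been collected (the resistance values shown are fabricated placeholders, not data from this project) and simply inverts the table to find the wiper code closest to a requested resistance.

```python
# Hypothetical calibration table: wiper code -> measured resistance in ohms.
# In the real project these values would come from 256 DMM readings;
# the roughly linear numbers below are placeholders for illustration only.
measured = {code: 125.0 + code * 390.0 for code in range(256)}

def wiper_for_resistance(target_ohms):
    """Return the wiper code whose measured resistance is closest to the target."""
    return min(measured, key=lambda code: abs(measured[code] - target_ohms))

if __name__ == "__main__":
    print(wiper_for_resistance(10_000))   # closest wiper code to 10 kilohms
```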

Used Keithley 195A Bench DMM from c.1982

I scored a used Keithley 195A digital multimeter from the early 1980s. This is a 5-1/2 digit bench DMM, and my unit has the Model 1950 AC/Amps option installed.

Plan of Action

While searching around, I found a thesis paper (German) by [Thomas Klima] on using an easy-to-build GPIB interface shield on a Raspberry Pi or a Pi Zero to communicate with lab instruments. His project is open source and well documented on GitHub pages (Raspberry Pi version here and Pi Zero version here) and on his elektronomikon website.

It is a simple circuit, supporting my gut-feeling assertion that GPIB is not that complicated and you could probably bit-bang it with an 8051. I assembled the project, and I had a Raspberry Pi Zero-W all ready to go.

Software wise, the shield utilizes the existing Linux kernel module linux-gpib. It looked easy to install and get running on the Pi in short order. After a couple of hours installing PyVisa and some instrument-specific libraries, I should be automatically recording data with Python scripts in less than a day. Alas, reality doesn’t always match our expectations.
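Once linux-gpib and PyVISA are installed, a first smoke test can be as small as the sketch below. The GPIB address, the choice of the pyvisa-py backend, and the assumption that the meter simply sends its current reading when addressed to talk are all assumptions for illustration; the 195A’s actual programming codes live in its manual.

```python
import time
import pyvisa

# Assumes the linux-gpib kernel module is configured and that the pyvisa-py
# backend with GPIB support is installed; the meter's address (16) is a guess.
rm = pyvisa.ResourceManager("@py")
dmm = rm.open_resource("GPIB0::16::INSTR")
dmm.timeout = 5000                     # milliseconds; old instruments are slow

readings = []
for _ in range(10):
    # The 195A predates SCPI; many meters of this era simply send their
    # current display value when addressed to talk, so a plain read is used.
    value = dmm.read()
    readings.append(value.strip())
    time.sleep(0.5)

print(readings)
dmm.close()
```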

GPIB Architecture

Bob “Mr Fancy Pants” Stern Operating a Rack of HP-IB Equipment in 1980

A little background perspective will be helpful in understanding the concept of GPIB. If we visited an electronics lab in the 60s, using a computer to control repetitive test sequences was the exception rather than the rule. Instead, you might see magnetic tape, paper tape, magnetic cards, or even cards onto which commands were marked in pencil. And for some setups computer control might not even be needed. For example, a temperature sensor might directly plot on a strip chart recorder or save values on a magnetic tape drive. If you remember that this is the world in which the HP engineers were immersed, the architecture makes sense.

OMR for the HP-3260A Marked Card Programmer (from Prof Jones’s Punch Card Collection, Univ of Iowa)

The GPIB is a flexible interconnection bus using 16 signal lines: an 8-bit data bus plus 8 control lines (3 for handshaking and 5 for bus management). Any device on the bus can be a passive listener or an active talker. A talker can speak to multiple devices at the same time, and devices can raise an interrupt if they have an event that needs to be serviced. Devices are interconnected using cabling and connectors which were small for their day, but are a nuisance compared to today’s USB, Ethernet, and serial cabling. The 24-pin Centronics connector allows for easy daisy chaining of devices, but is a hefty beast — in a pinch, you could use a GPIB cable effectively as nunchucks.

GPIB Cables Can Serve as Nunchucks in a Pinch

The traditional use of GPIB was a central control computer connected to a chain or star cluster of test gear. This has historically influenced the available GPIB interface hardware. For decades, ISA and later PCI interface cards were installed in computers, or the GPIB interface might be integrated if you were using an HP computer. They tended to be a bit expensive, but since one interface board controlled all the instruments, you only needed one card in a given test setup. National Instruments has become the leader in the GPIB world of both interface cards and supporting drivers and software, but their proprietary software and reputation for steep prices is a bit off-putting for many small companies and home labs.

You can certainly implement an automatic test setup entirely using GPIB cabling, 1970s-style. Many such legacy systems still exist, in fact, and still have to maintained. But more than likely, our use of GPIB these days would be to adapt one or two instruments so they can be used in your non-GPIB test setup, be that LAN, USB, serial, or some combination thereof. This turns the economics of the situation upside down, and is why low-cost GPIB adaptors for just one instrument are sought after.

Let the Problems Begin

The Pi Zero-W has built-in WiFi — in fact, that’s the only LAN connection unless you connect up external circuitry. But I couldn’t get it to connect to my WiFi router. For the longest time, I thought this was an operator error. I have quite a few Raspberry Pi 3s and 4s using WiFi mode with no issues. As I started troubleshooting the problem, I learned that the network management tools in Debian / Raspberry Pi OS have changed over the years. There are many tutorials showing different ways configure things, some of them being obsolete.

A headless Pi Zero-W was really dead without any LAN connection, so I assembled a rat’s nest of USB cabling and an HDMI adaptor so I could at least get a prompt, and ordered a couple of USB-LAN adaptors to get me online temporarily. After hours and hours of searching and testing ideas, I finally found a couple of obscure posts which suggested that the Pi Zero-W’s radio had problems connecting in some countries — South Korea was on that list.

Indeed this was the issue. I could temporarily change my router’s WiFi country to the USA, and the Pi Zero-W would connect just fine. I couldn’t leave it like that, so I switched back to South Korea and continued using wired LAN cabling for my immediate work. This particular problem does have a good ending, however. On the Raspberry Pi forums, one of their engineers was able to confirm the bug, and submitted a change request to Cypress Semiconductors. Some weeks later, we got a proposed updated firmware to test. It solved the problem and hopefully will be added in an upcoming release.

Router Goes Crazy

At this point, I have a couple of Pi Zeroes, a Pi 4B, and a few USB-LAN adaptors all working. Since these USB-LAN adaptors can move around — an adaptor could be on computer ABC today and on computer XYZ tomorrow — I carefully labeled each adaptor and entered its particulars into the /etc/hosts and /etc/ethers files on my router. And my network promptly died. This was tough to solve, because surprise, extracting information from the router is awkward when the network is frozen. I finally figured out that I had mistakenly crossed up two entries for the USB-LAN adaptors in the router’s tables, and this drove OpenWRT crazy.

USB-LAN Interfaces Get MAC Address Labels

This took so long to find and solve, my solution was a bit overboard in hindsight. First of all, I completely wiped the router and re-installed the firmware from scratch. I also took the time to better organize my hostname and static lease data. I found this Gist from [Krzysztof Burghardt] that converts your /etc/hosts and /etc/ethers into OpenWRT’s /etc/config/dhcp file, and tweaked it to suit my needs. I bought a second backup router that I can quickly swap over if this happens again. And last, but not least, I broke down and bought a label printer to clearly mark these USB-LAN adaptors with their MAC addresses.
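[Krzysztof Burghardt]’s Gist is its own code; purely to illustrate the general idea, the sketch below parses the traditional /etc/ethers (MAC, hostname) and /etc/hosts (IP, hostname) formats and prints OpenWrt-style static-lease stanzas for /etc/config/dhcp. It glosses over aliases, IPv6, and other real-world details.

```python
def parse_pairs(path):
    """Read 'first-field second-field' lines, skipping blanks and comments."""
    pairs = []
    with open(path) as handle:
        for line in handle:
            line = line.split("#", 1)[0].strip()
            if line:
                fields = line.split()
                if len(fields) >= 2:
                    pairs.append((fields[0], fields[1]))
    return pairs

def main():
    # /etc/ethers lines look like "aa:bb:cc:dd:ee:ff hostname"
    mac_by_host = {host: mac for mac, host in parse_pairs("/etc/ethers")}
    # /etc/hosts lines look like "192.168.1.10 hostname"
    ip_by_host = {host: ip for ip, host in parse_pairs("/etc/hosts")}

    for host, mac in sorted(mac_by_host.items()):
        ip = ip_by_host.get(host)
        if ip is None:
            continue                     # no static address known for this host
        # Emit an OpenWrt UCI static-lease stanza for /etc/config/dhcp.
        print("config host")
        print(f"\toption name '{host}'")
        print(f"\toption mac '{mac}'")
        print(f"\toption ip '{ip}'")
        print()

if __name__ == "__main__":
    main()
```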

Ready to Go

Let’s Measure!

Finally, I’m ready to do real work on my project. The flying leads in the background are just for fun – they go to an Analog Discovery 2 logic analyzer to observe the GPIB signals. The wristwatch is a nod to my laziness — I put an old smartphone on a tripod to watch the meter in the lab, and monitored it from my office desktop PC while testing Python scripts. Every once in a while the video would lock up, and I used the second hand as a sign of whether things were running smoothly or not. In part two of this saga, I’ll wrap up the measurement story, provide some more information on GPIB and its revisions, and show graphs from my automated test setup.

HP Board Result 2020: Class 10 Improvement, Compartment Results Out At Hpbose.org

HPBOSE Class 10 Matric Supplementary Results Declared

New Delhi:

The Himachal Pradesh Board of School Education (HPBOSE) has declared the Class 10 improvement and compartment test results at hpbose.org. The overall pass percentage in Class 10 HPBOSE improvement and compartment exams is 49.75 per cent. The board had released the HPBOSE Class 10 results on June 9. The overall pass percentage this year in Himachal Pradesh Board Class 10 was 68.11 per cent.

Students of HPBOSE Class 10 who wanted to improve their marks or were placed in the compartment category were allowed to appear for the improvement test, or additional exams and compartment exams in September. As many as 6,136 students registered for the HPBOSE improvement exams. However, only 3,042 students cleared the HP board compartment exams and 2,859 students have again been placed in the compartment category.

HPBOSE Matric Supplementary Exam Results: How To Download

Step 1: Visit the official website -- hpbose.org

Step 2: Click on the Results tab

Step 3: On the next window, select 10th (Compartment/Additional/Improvement) Examination Result, September-2020

Step 4: Insert roll number

Step 5: Submit and access HPBOSE Class 10 compartment, additional or improvement result 2020

Around one lakh students had written the HPBOSE 10th Class examination. The board held the Class 10 Himachal Pradesh exams between February 22 and March 19, 2020.

Re-Evaluation Of HPBOSE 10th Compartment Result 2020

Candidates who want to apply for re-evaluation or scrutiny of their compartment test result can apply online at hpbose.org till November 17, 2020. For re-evaluation of each paper, candidates will have to pay a fee of Rs 500 and for re-totaling, a fee of Rs 400 is to be paid.

To apply for re-evaluation, a candidate must score at least 20% in the subject. Offline applications will not be accepted by the board.

How 3D printing will transform manufacturing in 2020 and beyond

Design News caught up with Paul Benning, chief technologist for HP 3D Printing & Digital Manufacturing to get an idea of where additive manufacturing is headed in the future. Benning explained that we’re headed for mixed-materials printing, surfaces innovation, more involvement from academic community, and greater use of software and data management.

Automated assembly with mixed materials

Benning believes we will begin to see automated assembly with industries seamlessly integrating multi-part assemblies including combinations of 3D printed metal and plastic parts.  “There’s not currently a super printer that can do all things intrinsically, like printing metal and plastic parts, due to factors such as processing temperatures,” Benning told Design News. “However, as automation increases, there’s a vision from the industry for a more automated assembly setup where there is access to part production from both flavors of HP technology: Multi Jet Fusion and Metal Jet.”

While the medical industry and recently aerospace have incorporated 3D printing into production, Benning also sees car makers as a future customer for additive. “The auto sector is a great example of where automated assembly could thrive on the factory floor.”

Benning sees a wide range of applications that might combine metal and plastics. “Benefits of an automated assembly for industrial applications include printing metals into plastic parts, building parts that are wear-resistant and conduct electricity, adding surface treatments, and even building conductors or motors into plastic parts,” said Benning. “The industry isn’t ready to bring this technology to market just yet, but it’s an example of where 3D printing is headed beyond 2020.”

Surfaces will become an area of innovation

Benning sees a future where data payloads for 3D printed parts will be coded into the surface texture.  “It’s a competitive advantage to be able to build interesting things onto surfaces. HP has experimented with coding digital information into a surface texture. By encoding information into the texture itself, manufacturers can have a bigger data payload than just the serial number.”

He notes that the surface coding could be read by humans or machines. “One way to tag a part either overtly or covertly is to make sure that both people and machines are able to read it based on the shape or orientation of the bumps. We have put hundreds of copies of a serial number spread across the surface of a part so that it’s both hidden and universally apparent.”

Benning sees this concept as part of the future of digital manufacturing. “This is one of our inventions that serves to tie together our technologies with the future of parts tracking and data systems,” said Benning.
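
To make the idea concrete, here is a minimal sketch of what redundant surface encoding could look like in code. It is not HP's actual scheme: the payload format (serial number plus a CRC32 checksum) and the bump grid are assumptions chosen purely for illustration.

    # Illustrative sketch, not HP's real encoding: a serial number plus a
    # CRC32 checksum is turned into a bit pattern and tiled across a grid
    # of bump sites, so a reader seeing only part of the surface can still
    # recover the payload.
    import zlib

    def encode_payload(serial: str) -> list[int]:
        """Serial number + CRC32 checksum, expanded into a list of bits."""
        data = serial.encode() + zlib.crc32(serial.encode()).to_bytes(4, "big")
        return [(byte >> i) & 1 for byte in data for i in range(8)]

    def tile_surface(bits: list[int], rows: int, cols: int) -> list[list[int]]:
        """Repeat the bit pattern across a rows x cols grid of bump sites."""
        return [[bits[(r * cols + c) % len(bits)] for c in range(cols)] for r in range(rows)]

    # Each 1 could become a raised bump and each 0 a flat site on the part.
    texture = tile_surface(encode_payload("SN-004217"), rows=64, cols=64)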

Universities will introduce new ways of thinking

Benning believes that academia and training programs can offer new thought processes to liberate designers from old thinking and allow them to tap into technologies of the future. “3D printing’s biggest impact to manufacturing job skills lie on the design side,” said Benning. “You have a world of designers who have been trained in and grown up with existing technologies like injection molding. Because of this, people unintentionally bias their design toward legacy processes and away from technologies like 3D printing.”

Benning believes one solution for breaking old thinking is to train upcoming engineers in new ways of thinking. “To combat this, educators of current and soon-to-be designers must adjust the thought process that goes into designing for production given the new technologies in the space,” said Benning. “We recognize this will take some time, particularly for universities that are standing up degree programs.” He also believes new software design tools will guide designers to make better use of 3D printing in manufacturing.

Software and data management is critical to the 3D printing future

Benning believes advancements in software and data management will drive improved system management and part quality. This will then lead to better customer outcomes. “Companies within the industry are creating API hooks to build a fluid ecosystem for customers and partners,” said Benning.

HP is beginning to use data to enable ideal designs and optimized workflows for Multi Jet Fusion factories. “This data comes from design files, or mobile devices, or things like HP’s FitStation scanning technology and is applied to make production more efficient, and to better deliver individualized products purpose-built for their end customers.” The goal is for individualized production to support custom products built with mass-production manufacturing techniques, leading to batch-of-one or mass customization.
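
As a rough illustration of what such API hooks might look like from a customer's side, the sketch below submits a design file to a print-management service and polls the job status. The host, endpoints, field names, and job states are all invented for the example and do not describe HP's actual APIs.

    # Hypothetical print-job API client; every endpoint and field name here
    # is made up for illustration and is not HP's real interface.
    import time
    import requests

    API_BASE = "https://print-service.example.com/api/v1"   # placeholder host

    def submit_job(design_path: str, material: str) -> str:
        """Upload a design file and return the job ID assigned by the service."""
        with open(design_path, "rb") as f:
            resp = requests.post(
                f"{API_BASE}/jobs",
                files={"design": f},
                data={"material": material},
                timeout=60,
            )
        resp.raise_for_status()
        return resp.json()["job_id"]

    def wait_until_done(job_id: str, poll_seconds: int = 30) -> dict:
        """Poll the job until the service reports a terminal state."""
        while True:
            status = requests.get(f"{API_BASE}/jobs/{job_id}", timeout=30).json()
            if status["state"] in ("completed", "failed"):
                return status
            time.sleep(poll_seconds)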

Rob Spiegel has covered automation and control for 19 years, 17 of them for Design News. Other topics he has covered include supply chain technology, alternative energy, and cyber security. For 10 years, he was owner and publisher of the food magazine Chile Pepper.

Source: https://www.designnews.com/automation-motion-control/how-3d-printing-will-transform-manufacturing-2020-and-beyond (published Thu, 07 Jul 2022)
Killexams : HYPERCAR HORSEPOWER: SSC TUATARA ENGINE TAKES TO THE DYNO

The production version of the SSC Tuatara debuted in 2020 with a twin-turbo 5.9L flat-plane crank V8 that made 1,750 horsepower on E85 and 1,350 hp using 91-octane fuel, and is built by Nelson Racing ...

Source: https://www.msn.com/en-us/autos/autosnews/hypercar-horsepower-ssc-tuatara-engine-takes-to-the-dyno/ar-AAZk5oI (published Thu, 07 Jul 2022)

Killexams : Making Travel Plans in India? Here's What You Should Know

Although the number of recovered COVID-19 patients released from hospitals in India went up (as of November 26), the Union Ministry of Health and Family Welfare also recorded a rise in the number of active patients as well as deaths. With the number of infected people in many states on the rise, steps are being taken by various local governments to bring down the numbers and prevent further spread. So before making your travel plans, check for the latest rules.

Himachal Pradesh has recently declared that Lahaul-Spiti valley will remain closed to visitors until March/April next year. Apart from the fear of rising numbers, the harsh winter, the difficult terrain, and limited medical facilities have compelled the HP government to suspend tourist activities in the valley, including the closure of the famous Atal Tunnel connecting Manali with Spiti. The government has also declared night curfews (8pm to 6am) in Mandi, Kangra, Shimla and Kullu districts till December 15.

Representative image: Night curfew timings vary from place to place

Night curfews in select cities have also been declared by Punjab, Rajasthan (districts of Jaipur, Bikaner, Udaipur, Ajmer, Jodhpur, and Kota), Gujarat (Ahmedabad, Rajkot, Surat and Vadodara) and Madhya Pradesh (Indore, Bhopal, Gwalior, Vidisha and Ratlam). However, the duration and daily timings may differ from state to state. The opening and closing of local markets, hotels and restaurants may be moderated in line with the curfew. Public gatherings, including social events such as weddings, may also have to follow prescribed timings and rules.

Maharashtra, which had been one of the worst affected Indian states, has made a negative RT-PCR COVID-19 test report mandatory for people visiting the state from Delhi, Rajasthan, Gujarat and Goa. The report should not be more than three days (72 hours) old in the case of airline passengers, and not more than four days old for rail passengers. According to local media reports, air passengers without a negative test report will have to get themselves checked at the airport for a fee.
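
The validity window is easy to check mechanically. The sketch below assumes the 72-hour limit for air travel and a 96-hour limit for rail travel quoted above; the state's official notification remains the authoritative source.

    # Simple sketch of the report-validity rule: no older than 72 hours for
    # air passengers, 96 hours (four days) for rail passengers. Times here
    # are illustrative only.
    from datetime import datetime, timedelta

    MAX_AGE = {"air": timedelta(hours=72), "rail": timedelta(hours=96)}

    def report_is_valid(sample_time: datetime, travel_time: datetime, mode: str) -> bool:
        """Return True if the report is recent enough for the given travel mode."""
        return travel_time - sample_time <= MAX_AGE[mode]

    # Sample taken 70 hours before an air journey -> still valid (True)
    print(report_is_valid(datetime(2020, 11, 25, 8, 0), datetime(2020, 11, 28, 6, 0), "air"))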

In Uttar Pradesh, the state government has imposed Section 144 in the capital Lucknow, thus prohibiting the assembly of four or more persons in any particular place.

Lockdowns over the weekends have been imposed in Uttarakhand’s Dehradun district. Barring essential services, all stores will remain closed.

Apart from short-period lockdowns and night curfews, many states are implementing rapid antigen tests on those travelling by road at state or city borders, fining people for not wearing masks, etc.

Meanwhile, with the rise in COVID-19 cases, the central government too has announced new guidelines for the period between December 1 and December 31. In its fresh guidelines for COVID-19 surveillance, the Union Ministry of Home Affairs (MHA) has asked states to strictly enforce containment measures and regulate crowds.

Source: https://www.outlookindia.com/outlooktraveller/travelnews/story/70892/latest-travel-restrictions-and-night-curfews-in-indian-states-that-you-should-know-about (published Fri, 27 Nov 2020)
Killexams : HP Reverb Review – An Impressive Headset Stuck with Windows VR Controllers

Reverb is HP’s second VR headset, and this time around the company is aiming mainly at the enterprise market, while not shying away from selling individual units at a consumer price point. As the highest-resolution headset presently available at that price point, it has a unique selling point among its peers, though the usual compromises of Windows Mixed Reality still apply.

HP Reverb Review

To be up front, the HP Reverb headset itself is a solid improvement over its predecessor by most measures. The new design is comfortable and feels higher quality. The new displays and lenses offer a considerably better looking image. And on-board audio is a huge plus. However, while its hardware has improved in many ways, it’s still a ‘Windows Mixed Reality’ headset, which means it shares the same irksome controllers as all Windows VR headsets.

Reverb’s headlining feature is its high-resolution LCD displays, which are significantly more pixel dense than any headset in its class. On paper, we’re talking about 2,160 × 2,160 per display, which is a big step up over the next highest resolution headsets in the same class—the Valve Index, showcasing a resolution of 1,440 × 1,600 per display (also LCD, which means full RGB sub-pixels), and HTC Vive Pro’s dual 1,440 × 1,600 AMOLEDs, which feature an RGBG PenTile pixel matrix. Among the three, Reverb has a little more than twice the total number of pixels.
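
The arithmetic behind that “little more than twice” claim checks out, as the quick calculation below shows.

    # Total pixel counts from the per-eye resolutions quoted above.
    reverb = 2 * 2160 * 2160              # 9,331,200 pixels across both panels
    index_or_vive_pro = 2 * 1440 * 1600   # 4,608,000 pixels across both panels
    print(reverb / index_or_vive_pro)     # ~2.03x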

There’s no doubt that Reverb’s displays are very sharp, and very pixel dense. It’s impossible to focus on a single pixel, and the screen door effect (unlit spaces between pixels) is on the verge of being difficult to see. It has the best resolving power of any headset in its class, which means textures, edges, and text are especially crisp.

This is an example of a display with mura which shows varying brightness across the display; a perfect display would have perfectly consistent brightness from corner to corner.

Unfortunately, overall clarity is held back in a large way by plainly visible mura. At a glance, mura can look similar to the screen door effect (in the way that it’s ‘locked’ to your face and reduces clarity) but is actually a different artifact resulting from poor consistency in color and brightness across the display. It ends up looking like the display is somewhat cloudy.

As HP is mostly pushing Reverb for enterprise, they probably aren’t terribly concerned with this—after all, text legibility (a major selling point for enterprise customers) gets a big boost from the headset’s high resolution whether or not mura is present. For anyone interested in Reverb for visual immersion though, the mura unfortunately hampers where it might be otherwise.

There are also a few other curious visual artifacts. There’s a considerable amount of chromatic aberration outside of the lenses’ sweet spot. There’s also subtle but noticeable pupil swim (varying distortion across the lens that appears as motion as your eye moves across the lens). In most headsets, these are both significantly reduced via software corrections, and I’m somewhat hopeful that they could be improved with better lens correction profiles for Reverb in the future.

While I couldn’t spot any obvious ghosting or black smear, interestingly Reverb shows red smear, which is something I’ve never seen before. It’s the same thing you’d expect with black smear (where dark/black colors can bleed into brighter colors when you move your head, especially white), but in Reverb it manifests most when red (or any color substantially composed of red, including white) shares a boundary with a dark/black color. In my testing this hasn’t led to any significant annoyance but, as ever, it could be bothersome in some specific content.
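
For context on what those software corrections typically involve: VR runtimes usually apply a per-color-channel radial distortion profile before the image reaches the lens, which is also how lateral chromatic aberration is countered. The sketch below shows the general shape of such a correction; the polynomial coefficients are invented and are not Reverb's real lens profile.

    # Generic per-channel radial distortion correction of the kind VR
    # compositors apply; coefficients here are made up for illustration.
    def distort_radius(r: float, k1: float, k2: float) -> float:
        """Map a radial coordinate through a simple polynomial lens model."""
        return r * (1.0 + k1 * r**2 + k2 * r**4)

    def correct_uv(u: float, v: float, coeffs: dict[str, tuple[float, float]]) -> dict:
        """Return per-channel sampling coordinates for one output pixel."""
        r = (u * u + v * v) ** 0.5
        out = {}
        for channel, (k1, k2) in coeffs.items():
            scale = distort_radius(r, k1, k2) / r if r > 0 else 1.0
            out[channel] = (u * scale, v * scale)
        return out

    # Slightly different coefficients per channel pull the red, green, and
    # blue sampling points apart, compensating lateral chromatic aberration.
    sample = correct_uv(0.4, 0.3, {"r": (0.22, 0.05), "g": (0.24, 0.06), "b": (0.26, 0.07)})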

From a field of view standpoint, HP claims 114 degrees diagonally for Reverb, which is higher than what’s typically quoted for headsets like the Rift (~100) and Vive (~110). Nobody in the industry really seems to agree on what amounts to a valid field of view measurement though, and to my eyes, Reverb’s field of view falls somewhere between the two. So whether you call it 105 or 114, Reverb is in the same field of view class as most other PC VR headsets. These are Fresnel lenses, which means they are susceptible to god rays, which are about as apparent on Reverb as with recent headsets like the Rift S, and a bit less prevalent than the original Rift and Vive.

Reverb’s other big feature is its major ergonomic redesign. HP has ditched the halo headstrap approach seen on every other Windows VR headset and instead opted for a much more (original) Rift-like design, including on-ear headphones. At least to my head, Reverb’s ergonomics feel like a big improvement over HP’s original Windows VR headset.

I found it quite easy to use for an hour or more while maintaining comfort. As with all headsets of this design, the trick is knowing how to fit it right (which isn’t usually intuitive). New users are always tempted to tighten the side straps and clamp the headset onto their face like a vice, but the key is to find the spot where the rear ring can grip the crown of your head, then tighten the top strap to ‘lift’ the visor so that it’s held up by ‘hanging’ from the top strap rather than by sheer friction against your face. The side straps should be as loose as possible while still maintaining stability.

I was able to get Reverb to feel very comfortable, but I’m a little worried that the headset won’t easily accommodate larger heads or noses. Personally speaking, I don’t fall on either end of the spectrum for head or nose size, so I’m guessing I’m fairly average in that department. Even so, I had Reverb’s side straps as loose as they would possibly go in order to get it to fit well. If I had a bigger head, the straps themselves wouldn’t have more room to accommodate it; all the extra space would be made up by further stretching the springs in the side struts, which would put more pressure on my face than is ideal.

I also felt like I was pushing the limits of the headphones and the nose gap. The best fit for the headphones is to have them all the way in their bottom position; if there were a greater distance between the top of my head and my ears, or if I preferred the top strap adjustment more tightly, the headphones wouldn’t be able to extend far enough down to be centered on my ears.

With the nose gap, I was feeling a bit of pressure on the bridge of my nose, and actually opted to remove the nose gasket entirely (the piece of rubber that blocks light), which gave me just enough room to not feel like the headset was in constant contact with the bridge of my nose. If you have a larger nose or a greater distance between the center of your eye and your nose’s bridge, you might find the nose gap on Reverb annoyingly small.

As with most other Windows VR headsets, Reverb lacks a hardware IPD adjustment, which means only those near to the headset’s fixed IPD setting will have ideal alignment between their eyes and the optical center of the lenses. We’ve reached out to HP to confirm the headset’s fixed IPD measurement, though I expect it to fall very close to 64mm. If you are far from the headset’s fixed figure, you’ll unfortunately lose out on some clarity.

So, if it fits, Reverb from a hardware standpoint is a pretty solid headset, and the singular choice for anyone prioritizing resolution over anything else. However, Reverb can’t escape the caveats that come with all Windows VR headsets.

Mostly that’s the controllers and their tracking. Reverb uses the same Windows VR controllers as every other Windows VR headset except for Samsung (which has slightly different controllers). Yes, they work, but they are the worst 6DOF controllers on the market. They’re flimsy, bulky, and not very ergonomic. They actually track quite well from a performance standpoint, but their tracking coverage hardly extends outside of your field of view, which means they lose tracking any time your hands linger outside of the sensor’s reach, even if that means just letting them hang naturally down by your sides.

The tracking coverage issue is primarily driven by the tracking system used in every Windows VR headset: a two-camera inside-out system. HP says Reverb’s tracking is identical to the first generation headsets, and as such, Reverb’s two cameras lose controller tracking as often as its Windows VR contemporaries. Luckily, the headtracking itself is pretty darn good (on par with Rift S in my experience so far), and so is controller tracking performance when near the headset’s field of view. For content where your hands are almost always in your field of view (or only leave it briefly), Windows VR controller tracking can work just fine. In fact, Reverb holds up very well when playing Beat Saber on its highest difficulty because your hands don’t spend much time outside of the field of view before entering it again (to slice a block). But there’s tons of content where your hands won’t be consistently held in the headset’s field of view, and that’s when things can get annoying.

For all of its downsides, the Windows VR tracking system also means that Reverb gets room-scale 360 tracking out of the box and doesn’t rely on any external sensors. That’s great because it means relatively easy setup, and support for large tracking volumes.

The compromises on the controller design and tracking were easy to swallow considering how inexpensively you could find a Windows VR headset ($250 new in box is not uncommon). But Reverb has introduced itself as the new premium option among Windows VR headsets at $600, which shines a much brighter light on the baggage that comes with every Windows VR headset to date.

While Windows Mixed Reality—which is built into Windows and comes with its very own VR spatial desktop—is the native platform for Reverb and all other Windows VR headsets, there’s an official plugin that makes it compatible with most SteamVR content, which vastly expands the range of content available on the headset.


Disclosure: HP provided Road to VR with a Reverb headset.

Source: https://www.roadtovr.com/hp-reverb-review-vr-headset/ (Ben Lang, published Sun, 16 Aug 2020)