Download the free ATTA PDF and read it before taking the exam.

Make sure you have ASTQB ATTA practice questions for the Advanced Level Technical Test Analyst exam before you take the real examination. We provide up-to-date and valid ATTA practice material containing actual test questions, gathered into a database of questions drawn from real exams.

Exam Code: ATTA Practice exam 2022
ATTA Advanced Level Technical Test Analyst

Exam ID : ATTA
Exam Title : Advanced Technical Test Analyst (ASTQB)
Number of Questions in exam : 45
Passing Score : 65%
Exam Type : Multiple Choice Questions

- Summarize the generic risk factors that the Technical Test Analyst typically needs to consider.
- Summarize the activities of the Technical Test Analyst within a risk-based approach for testing activities.
- Write test cases from a given specification item by applying the Statement testing test technique to achieve a defined level of coverage.
- Write test cases from a given specification item by applying the Modified Condition/Decision Coverage (MC/DC) test technique to achieve coverage.
- Write test cases from a given specification item by applying the Multiple Condition testing test technique to achieve a defined level of coverage.
- Write test cases from a given specification item by applying McCabe's Simplified Baseline Method.
- Understand the applicability of API testing and the kinds of defects it finds.
- Select an appropriate white-box test technique according to a given project situation.
- Use control flow analysis to detect if code has any control flow anomalies.
- Explain how data flow analysis is used to detect if code has any data flow anomalies.
- Propose ways to improve the maintainability of code by applying static analysis.
- Explain the use of call graphs for establishing integration testing strategies.
- Apply dynamic analysis to achieve a specified goal.
- For a particular project and system under test, analyze the non-functional requirements and write the respective sections of the test plan.
- Given a particular product risk, define the particular non-functional test type(s) which are most appropriate.
- Understand and explain the stages in an application's lifecycle where non-functional tests should be applied.
- For a given scenario, define the types of defects you would expect to find by using non-functional testing types.
- Explain the reasons for including security testing in a test strategy and/or test approach.
- Explain the principal aspects to be considered in planning and specifying security tests.
- Explain the reasons for including reliability testing in a test strategy and/or test approach.
- Explain the principal aspects to be considered in planning and specifying reliability tests.
- Explain the reasons for including performance testing in a test strategy and/or test approach.
- Explain the principal aspects to be considered in planning and specifying performance efficiency tests.
- Explain the reasons for including maintainability testing in a testing strategy and/or test approach.
- Explain the reasons for including portability tests in a testing strategy and/or test approach.
- Explain the reasons for compatibility testing in a testing strategy and/or test approach.
- Explain why review preparation is important for the Technical Test Analyst.
- Analyze an architectural design and identify problems according to a checklist provided in the syllabus.
- Analyze a section of code or pseudo-code and identify problems according to a checklist provided in the syllabus.
- Summarize the activities that the Technical Test Analyst performs when setting up a test automation project.
- Summarize the differences between data-driven and keyword-driven automation.
- Summarize common technical issues that cause automation projects to fail to achieve the planned return on investment.
- Construct keywords based on a given business process.
- Summarize the purpose of tools for fault seeding and fault injection.
- Summarize the main characteristics and implementation issues for performance testing tools.
- Explain the general purpose of tools used for web-based testing.
- Explain how tools support the practice of model-based testing.
- Outline the purpose of tools used to support component testing and the build process.
- Outline the purpose of tools used to support mobile application testing.

1. The Technical Test Analyst's Tasks in Risk-Based Testing
Keywords: product risk, risk assessment, risk identification, risk mitigation, risk-based testing
Learning Objectives for The Technical Test Analyst's Tasks in Risk-Based Testing
1.2 Risk-based Testing Tasks
- Summarize the generic risk factors that the Technical Test Analyst typically needs to consider
- Summarize the activities of the Technical Test Analyst within a risk-based approach for testing activities
1.1 Introduction
The Test Manager has overall responsibility for establishing and managing a risk-based testing strategy. The Test Manager usually will request the involvement of the Technical Test Analyst to ensure the risk-based approach is implemented correctly. Technical Test Analysts work within the risk-based testing framework established by the Test Manager for the project. They contribute their knowledge of the technical product risks that are inherent in the project, such as risks related to security, system reliability and performance.
1.2 Risk-based Testing Tasks
Because of their particular technical expertise, Technical Test Analysts are actively involved in the following risk-based testing tasks:
• Risk identification
• Risk assessment
• Risk mitigation
These tasks are performed iteratively throughout the project to deal with emerging product risks and changing priorities, and to regularly evaluate and communicate risk status.
1.2.1 Risk Identification
By calling on the broadest possible sample of stakeholders, the risk identification process is most likely to detect the largest possible number of significant risks. Because Technical Test Analysts possess unique technical skills, they are particularly well-suited for conducting expert interviews, brainstorming with co-workers and also analyzing the current and past experiences to determine where the likely areas of product risk lie. In particular, Technical Test Analysts work closely with other stakeholders, such as developers, architects, operations engineers, product owners, local support offices, and service desk technicians, to determine areas of technical risk impacting the product and project. Involving other stakeholders ensures that all views are considered and is typically facilitated by Test Managers.
Risks that might be identified by the Technical Test Analyst are typically based on the [ISO25010] quality characteristics listed in Chapter 4, and include, for example:
• Performance efficiency (e.g., inability to achieve required response times under high load conditions)
• Security (e.g., disclosure of sensitive data through security attacks)
• Reliability (e.g., application unable to meet availability specified in the Service Level Agreement)
1.2.2 Risk Assessment
While risk identification is about identifying as many pertinent risks as possible, risk assessment is the study of those identified risks in order to categorize each risk and determine the likelihood and impact associated with it. The likelihood of occurrence is usually interpreted as the probability that the potential problem could exist in the system under test.
The Technical Test Analyst contributes to finding and understanding the potential technical product risk for each risk item whereas the Test Analyst contributes to understanding the potential business impact of the problem should it occur.
Project risks can impact the overall success of the project. Typically, the following generic project risks need to be considered:
• Conflict between stakeholders regarding technical requirements
• Communication problems resulting from the geographical distribution of the development organization
• Tools and technology (including relevant skills)
• Time, resource and management pressure
• Lack of earlier quality assurance
• High change rates of technical requirements
Product risk factors may result in higher numbers of defects. Typically, the following generic product risks need to be considered:
• Complexity of technology
• Complexity of code structure
• Amount of re-use compared to new code
• Large number of defects found relating to technical quality characteristics (defect history)
• Technical interface and integration issues
Given the available risk information, the Technical Test Analyst proposes an initial risk level according to the guidelines established by the Test Manager. For example, the Test Manager may determine that risks should be categorized with a value from 1 to 10, with 1 being highest risk. The initial value may be modified by the Test Manager when all stakeholder views have been considered.
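The proposal of an initial risk level can be sketched as a simple likelihood × impact calculation. This is an illustrative scheme (not a prescribed ISTQB method) in which a higher score means higher risk; mapping scores onto the Test Manager's scale, such as the 1-to-10 example above with 1 the highest, is a separate policy decision.

```python
# Illustrative risk scoring: likelihood and impact each rated 1-10,
# with 10 the highest. The aggregation rule (multiplication) is an
# assumption for illustration only.

def risk_priority(likelihood: int, impact: int) -> int:
    """Return a priority score; a higher value means a more severe risk."""
    for value in (likelihood, impact):
        if not 1 <= value <= 10:
            raise ValueError("likelihood and impact must be in 1..10")
    return likelihood * impact

# Hypothetical technical product risks identified for a project.
risks = [
    ("response time degrades under peak load", 7, 9),
    ("sensitive data disclosed via injection attack", 3, 10),
    ("availability below the SLA target", 5, 6),
]

# Rank most severe first, as input to test prioritization.
ranked = sorted(risks, key=lambda r: risk_priority(r[1], r[2]), reverse=True)
```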
1.2.3 Risk Mitigation
During the project, Technical Test Analysts influence how testing responds to the identified risks. This generally involves the following:
• Reducing risk by executing the most important tests (those addressing high risk areas) and by putting into action appropriate mitigation and contingency measures as stated in the test plan
• Evaluating risks based on additional information gathered as the project unfolds, and using that information to implement mitigation measures aimed at decreasing the likelihood or avoiding the impact of those risks
The Technical Test Analyst will often cooperate with specialists certified in areas such as security and performance to define risk mitigation measures and elements of the organizational test strategy. Additional information can be obtained from ISTQB® Specialist syllabi, such as the Advanced Level Security Testing syllabus [ISTQB_ALSEC_SYL] and the Foundation Level Performance Testing syllabus [ISTQB_FLPT_SYL].
2. White-box Test Techniques
API testing, atomic condition, control flow testing, cyclomatic complexity, decision testing, modified condition/decision testing, multiple condition testing, path testing, short-circuiting, statement testing, white-box test technique
Learning Objectives for White-Box Testing
2.2 Statement Testing
TTA-2.2.1 (K3) Write test cases for a given specification item by applying the Statement test technique to achieve a defined level of coverage
2.3 Decision Testing
TTA-2.3.1 (K3) Write test cases for a given specification item by applying the Decision test technique to achieve a defined level of coverage
2.4 Modified Condition/Decision Coverage (MC/DC) Testing
TTA-2.4.1 (K3) Write test cases by applying the Modified Condition/Decision Coverage (MC/DC) test design technique to achieve a defined level of coverage
2.5 Multiple Condition Testing
TTA-2.5.1 (K3) Write test cases for a given specification item by applying the Multiple Condition test technique to achieve a defined level of coverage
2.6 Basis Path Testing
TTA-2.6.1 (K3) Write test cases for a given specification item by applying McCabe's Simplified Baseline Method
2.7 API Testing
TTA-2.7.1 (K2) Understand the applicability of API testing and the kinds of defects it finds
2.8 Selecting a White-box Test Technique
TTA-2.8.1 (K4) Select an appropriate white-box test technique according to a given project situation
2.1 Introduction
This chapter principally describes white-box test techniques. These techniques apply to code and other structures, such as business process flow charts. Each specific technique enables test cases to be derived systematically and focuses on a particular aspect of the structure to be considered. The techniques provide coverage criteria which have to be measured and associated with an objective defined by each project or organization. Achieving full coverage does not mean that the entire set of tests is complete, but rather that the technique being used no longer suggests any useful tests for the structure under consideration.
The following techniques are considered in this syllabus:
• Statement testing
• Decision testing
• Modified Condition/Decision Coverage (MC/DC) testing
• Multiple Condition testing
• Basis Path testing
• API testing
The Foundation Syllabus [ISTQB_FL_SYL] introduces Statement testing and Decision testing. Statement testing exercises the executable statements in the code, whereas Decision testing exercises the decisions in the code and tests the code that is executed based on the decision outcomes.
The MC/DC and Multiple Condition techniques listed above are based on decision predicates and broadly find the same types of defects. No matter how complex a decision predicate may be, it will evaluate to either TRUE or FALSE, which will determine the path taken through the code. A defect is detected when the intended path is not taken because a decision predicate does not evaluate as expected.
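To make this concrete, consider a small invented predicate with three atomic conditions. The sketch below (an illustration, not taken from the syllabus) contrasts a Multiple Condition test set, which requires every combination of condition values, with a minimal MC/DC set, where each condition is shown to independently affect the decision outcome.

```python
from itertools import product

# Invented decision predicate with three atomic conditions a, b, c.
def decision(a: bool, b: bool, c: bool) -> bool:
    return a and (b or c)

# Multiple Condition coverage requires every combination: 2**3 = 8 tests.
all_combinations = list(product([False, True], repeat=3))

# An MC/DC test set: flipping one condition while holding the others
# fixed flips the decision, demonstrating each condition's independent
# effect on the outcome.
mcdc_tests = [
    (True,  True,  False),  # baseline: decision is True
    (False, True,  False),  # flip a -> decision False: a's effect shown
    (True,  False, False),  # flip b -> decision False: b's effect shown
    (True,  False, True),   # from previous row, flip c -> True: c's effect
]
```

Here Multiple Condition coverage needs 8 tests while MC/DC achieves its goal with 4; in general, MC/DC for n atomic conditions can typically be reached with n + 1 tests.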
The first four techniques are successively more thorough (and Basis Path testing is more thorough than Statement and Decision testing); more thorough techniques generally require more tests to be defined in order to achieve their intended coverage and find more subtle defects.
2.2 Statement Testing
Statement testing exercises the executable statements in the code. Coverage is measured as the number of statements executed by the tests divided by the total number of executable statements in the test object, normally expressed as a percentage.
Applicability
This level of coverage should be considered as a minimum for all code being tested.
Decisions are not considered. Even high percentages of statement coverage may not detect certain defects in the code's logic.
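A small invented example shows this limitation: the single test below executes every statement, yet the defect sits on the untested False outcome of the decision.

```python
# Invented example: a discount routine with a defect on the False
# path of its one decision. A single test with total=150 executes
# every statement (100% statement coverage) yet never exercises the
# case where the IF is false -- exactly where the defect hides.

def discounted_price(total: float) -> float:
    if total >= 100:
        discount = 0.10
    # Defect: no else branch, so 'discount' is undefined when total < 100.
    return total * (1 - discount)

# Achieves full statement coverage on its own, and passes:
assert discounted_price(150) == 135.0

# Decision testing also demands the False outcome, which exposes the bug:
try:
    discounted_price(50)
except UnboundLocalError:
    print("defect found by covering the False decision outcome")
```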
2.3 Decision Testing
Decision testing exercises the decisions in the code and tests the code that is executed based on the decision outcomes. To do this, the test cases follow the control flows that occur from a decision point (e.g., for an IF statement, one for the true outcome and one for the false outcome; for a CASE statement, test cases would be required for all the possible outcomes, including the default outcome).
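These outcomes can be tallied by hand to see how decision coverage accumulates; real coverage tools do this instrumentation automatically, and the sketch below is only an illustration.

```python
# Hand-instrumented decision-outcome coverage for a single IF.
# The tally records which outcomes the executed tests have reached.

outcomes_hit = set()
TOTAL_OUTCOMES = 2          # one IF: a True outcome and a False outcome

def classify(age: int) -> str:
    if age >= 18:
        outcomes_hit.add("d1:true")
        return "adult"
    else:
        outcomes_hit.add("d1:false")
        return "minor"

def decision_coverage() -> float:
    return len(outcomes_hit) / TOTAL_OUTCOMES

classify(30)                 # only the True outcome so far
half = decision_coverage()   # 0.5, i.e., 50% decision coverage
classify(12)                 # now the False outcome as well
full = decision_coverage()   # 1.0, i.e., 100% decision coverage
```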
Coverage is measured as the number of decision outcomes executed by the tests divided by the total number of decision outcomes in the test object, normally expressed as a percentage. Compared to the MC/DC and Multiple Condition techniques described below, decision testing considers the entire decision as a whole and evaluates the TRUE and FALSE outcomes in separate test cases.
Applicability
The most useful checklists are those gradually developed by an individual organization, because they reflect:
• The nature of the product
• The local development environment
o Staff
o Tools
o Priorities
• History of previous successes and defects
• Particular issues (e.g., performance efficiency, security)
Checklists should be customized for the organization and perhaps for the particular project. The checklists provided in this chapter are meant only to serve as examples.
Some organizations extend the usual notion of a software checklist to include “anti-patterns” that refer to common errors, poor techniques, and other ineffective practices. The term derives from the popular concept of “design patterns” which are reusable solutions to common problems that have been shown to be effective in practical situations [Gamma94]. An anti-pattern, then, is a commonly made error, often implemented as an expedient short-cut.
It is important to remember that if a requirement is not testable, meaning that it is not defined in such a way that the Technical Test Analyst can determine how to test it, then it is a defect. For example, a requirement that states “The software should be fast” cannot be tested. How can the Technical Test Analyst determine if the software is fast? If, instead, the requirement said “The software must provide a maximum response time of three seconds under specific load conditions”, then the testability of this requirement is substantially better assuming the “specific load conditions” (e.g., number of concurrent users, activities performed by the users) are defined. It is also an overarching requirement because this one requirement could easily spawn many individual test cases in a non-trivial application. Traceability from this requirement to the test cases is also critical because if the requirement should change, all the test cases will need to be reviewed and updated as needed.
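A testable form of that requirement translates almost directly into an automated check. In the sketch below, the operation under test and the load parameters are placeholders standing in for the real system and its specified load conditions.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Placeholder for the real operation under test (the name and the
# simulated workload are assumptions for this sketch).
def handle_request() -> None:
    time.sleep(0.01)   # stands in for real processing work

def max_response_time(concurrent_users: int, requests_per_user: int) -> float:
    """Drive the operation under simulated concurrent load and return
    the slowest observed response time in seconds."""
    def timed_call(_: int) -> float:
        start = time.perf_counter()
        handle_request()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        durations = list(pool.map(
            timed_call, range(concurrent_users * requests_per_user)))
    return max(durations)

# "Maximum response time of three seconds under specific load
# conditions" becomes a concrete, repeatable check:
assert max_response_time(concurrent_users=10, requests_per_user=5) < 3.0
```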
5.2.1 Architectural Reviews
Software architecture consists of the fundamental organization of a system, embodied in its components, their relationships to each other and the environment, and the principles governing its design and evolution [ISO42010], [Bass03]. Checklists used for architecture reviews could, for example, include verification of the proper implementation of the following items, which are quoted from [Web-2]:
• “Connection pooling - reducing the execution time overhead associated with establishing database connections by establishing a shared pool of connections
• Load balancing – spreading the load evenly between a set of resources
• Distributed processing
• Caching – using a local copy of data to reduce access time
• Lazy instantiation
• Transaction concurrency
• Process isolation between Online Transactional Processing (OLTP) and Online Analytical Processing (OLAP)
• Replication of data”
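As an illustration of the first checklist item, a minimal connection pool can be sketched as follows; the connection class here is a stand-in, not a real database driver.

```python
import queue

class FakeConnection:
    """Stand-in for an expensive-to-create database connection."""
    opened = 0
    def __init__(self):
        FakeConnection.opened += 1

class ConnectionPool:
    """Minimal shared pool: connections are created once up front and
    reused, avoiding the per-request connection setup overhead."""
    def __init__(self, size: int):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(FakeConnection())

    def acquire(self) -> FakeConnection:
        return self._idle.get()      # blocks when the pool is exhausted

    def release(self, conn: FakeConnection) -> None:
        self._idle.put(conn)

pool = ConnectionPool(size=3)
for _ in range(100):                 # 100 requests served...
    conn = pool.acquire()
    pool.release(conn)
# ...yet only 3 connections were ever established.
```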
5.2.2 Code Reviews
Checklists for code reviews are necessarily very detailed, and, as with checklists for architecture reviews, are most useful when they are language, project and company-specific. The inclusion of code-level anti-patterns is helpful, particularly for less experienced software developers.
Checklists used for code reviews could include the following items:
1. Structure
• Does the code completely and correctly implement the design?
• Does the code conform to any pertinent coding standards?
• Is the code well-structured, consistent in style, and consistently formatted?
• Are there any uncalled or unneeded procedures or any unreachable code?
• Are there any leftover stubs or test routines in the code?
• Can any code be replaced by calls to external reusable components or library functions?
• Are there any blocks of repeated code that could be condensed into a single procedure?
• Is storage use efficient?
• Are symbolic constants used rather than “magic number” constants or string constants?
• Are any modules excessively complex, such that they should be restructured or split into multiple modules?
2. Documentation
• Is the code clearly and adequately documented with an easy-to-maintain commenting style?
• Are all comments consistent with the code?
• Does the documentation conform to applicable standards?
3. Variables
• Are all variables properly defined with meaningful, consistent, and clear names?
• Are there any redundant or unused variables?
4. Arithmetic Operations
• Does the code avoid comparing floating-point numbers for equality?
• Does the code systematically prevent rounding errors?
• Does the code avoid additions and subtractions on numbers with greatly different magnitudes?
• Are divisors tested for zero or noise?
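The floating-point and division items above can be demonstrated in a few lines; `math.isclose` is the standard-library remedy for the equality pitfall, and the `eps` threshold for "noise" is an illustrative choice.

```python
import math

# Pitfall: exact equality on floats fails due to representation error.
total = 0.1 + 0.2
print(total == 0.3)              # False: total is 0.30000000000000004

# The remedy the checklist points to: compare within a tolerance.
print(math.isclose(total, 0.3))  # True

# Divisor tested against zero and against "noise" (near-zero values).
def safe_ratio(num: float, den: float, eps: float = 1e-9) -> float:
    if abs(den) < eps:
        raise ZeroDivisionError("divisor is zero or numerically negligible")
    return num / den
```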
5. Loops and Branches
• Are all loops, branches, and logic constructs complete, correct, and properly nested?
• Are the most common cases tested first in IF-ELSEIF chains?
• Are all cases covered in an IF-ELSEIF or CASE block, including ELSE or DEFAULT clauses?
• Does every case statement have a default?
• Are loop termination conditions obvious and invariably achievable?
• Are indices or subscripts properly initialized, just prior to the loop?
• Can any statements that are enclosed within loops be placed outside the loops?
• Does the code in the loop avoid manipulating the index variable or using it upon exit from the loop?
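Two of the items above, a default for every case and loop-invariant code kept outside the loop, can be sketched as follows; the business logic is invented.

```python
# "Default clause" item: unknown regions are handled explicitly via a
# default value rather than falling through silently.
def shipping_label(region: str) -> str:
    labels = {"EU": "DDP", "US": "DAP", "UK": "DDU"}
    return labels.get(region, "UNKNOWN")

# Loop items: the invariant multiplier is computed once, outside the
# loop, and the iteration variable is never reassigned inside it.
def total_with_tax(prices: list[float], rate: float) -> float:
    multiplier = 1 + rate
    total = 0.0
    for price in prices:
        total += price * multiplier
    return total
```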
6. Defensive Programming
• Are indices, pointers, and subscripts tested against array, record, or file bounds?
• Are imported data and input arguments tested for validity and completeness?
• Are all output variables assigned?
• Is the correct data element operated on in each statement?
• Is every memory allocation released?
• Are timeouts or error traps used for external device access?
• Are files checked for existence before attempting to access them?
• Are all files and devices left in the correct state upon program termination?
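A few of these defensive-programming items can be sketched directly; the function names and data are invented.

```python
import os

# Subscript tested against the collection's bounds before use.
def read_record(fields: list[str], index: int) -> str:
    if not 0 <= index < len(fields):
        raise IndexError(
            f"field index {index} out of range 0..{len(fields) - 1}")
    return fields[index]

# File checked for existence before attempting to access it, and the
# 'with' block guarantees it is left closed even if reading fails.
def load_config(path: str) -> str:
    if not os.path.exists(path):
        raise FileNotFoundError(path)
    with open(path, encoding="utf-8") as handle:
        return handle.read()
```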
6. Test Tools and Automation
Keywords: capture/playback, data-driven testing, debugging, emulator, fault seeding, hyperlink, keyword-driven testing, performance efficiency, simulator, test execution, test management
Learning Objectives for Test Tools and Automation
6.1 Defining the Test Automation Project
TTA-6.1.1 (K2) Summarize the activities that the Technical Test Analyst performs when setting up a test automation project
TTA-6.1.2 (K2) Summarize the differences between data-driven and keyword-driven automation
TTA-6.1.3 (K2) Summarize common technical issues that cause automation projects to fail to achieve the planned return on investment
TTA-6.1.4 (K3) Construct keywords based on a given business process
6.2 Specific Test Tools
TTA-6.2.1 (K2) Summarize the purpose of tools for fault seeding and fault injection
TTA-6.2.2 (K2) Summarize the main characteristics and implementation issues for performance testing tools
TTA-6.2.3 (K2) Explain the general purpose of tools used for web-based testing
TTA-6.2.4 (K2) Explain how tools support the practice of model-based testing
TTA-6.2.5 (K2) Outline the purpose of tools used to support component testing and the build process
TTA-6.2.6 (K2) Outline the purpose of tools used to support mobile application testing
6.1 Defining the Test Automation Project
In order to be cost-effective, test tools (and particularly those which support test execution) must be carefully architected and designed. Implementing a test execution automation strategy without a solid architecture usually results in a tool set that is costly to maintain, insufficient for the purpose and unable to achieve the target return on investment.
A test automation project should be considered a software development project. This includes the need for architecture documentation, detailed design documentation, design and code reviews, component and component integration testing, as well as final system testing. Testing can be needlessly delayed or complicated when unstable or inaccurate test automation code is used.
There are multiple tasks that the Technical Test Analyst can perform regarding test execution automation. These include:
• Determining who will be responsible for the test execution (possibly in coordination with a Test Manager)
• Selecting the appropriate tool for the organization, timeline, skills of the team, and maintenance requirements (note this could mean deciding to create a tool to use rather than acquiring one)
• Defining the interface requirements between the automation tool and other tools, such as test management tools, defect management tools, and tools used for continuous integration
• Developing any adapters which may be required to create an interface between the test execution tool and the software under test
• Selecting the automation approach, i.e., keyword-driven or data-driven (see Section 6.1.1 below)
• Working with the Test Manager to estimate the cost of the implementation, including training. In Agile projects this aspect would typically be discussed and agreed in project/sprint planning meetings with the whole team.
• Scheduling the automation project and allocating the time for maintenance
• Training the Test Analysts and Business Analysts to use and supply data for the automation
• Determining how and when the automated tests will be executed
• Determining how the automated test results will be combined with the manual test results
In projects with a strong emphasis on test automation, a Test Automation Engineer may be tasked with many of these activities (see the Advanced Level Test Automation Engineer syllabus [ISTQB_ALTAE_SYL] for details). Certain organizational tasks may be taken on by a Test Manager according to project needs and preferences. In Agile projects the assignment of these tasks to roles is typically more flexible and less formal.
These activities and the resulting decisions will influence the scalability and maintainability of the automation solution. Sufficient time must be spent researching the options, investigating available tools and technologies and understanding the future plans for the organization.
6.1.1 Selecting the Automation Approach
This section considers the following factors which impact the test automation approach:
• Automating through the GUI
• Applying a data-driven approach
• Applying a keyword-driven approach
• Handling software failures
• Considering system state
The Advanced Level Test Automation Engineer syllabus [ISTQB_ALTAE_SYL] includes further details on selecting an automation approach.
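The difference between the data-driven and keyword-driven approaches can be sketched in miniature; the keywords, business process, and executor below are invented examples, far simpler than a real framework.

```python
# Data-driven: one fixed script, many data rows.
login_data = [
    ("alice", "correct-pw", True),   # user, password, expected success
    ("alice", "wrong-pw", False),
]

# Keyword-driven: the test itself is a table of (keyword, args) rows,
# interpreted by an executor. The keyword implementations stay hidden
# from the Test Analysts who author the tables.
def open_app(state):
    state["open"] = True

def login(state, user, password):
    # Toy credential check standing in for the system under test.
    state["user"] = user if password == "correct-pw" else None

def assert_logged_in(state, expected):
    assert (state.get("user") is not None) == expected

KEYWORDS = {
    "OpenApp": open_app,
    "Login": login,
    "AssertLoggedIn": assert_logged_in,
}

def run(test_table):
    state = {}
    for keyword, *args in test_table:
        KEYWORDS[keyword](state, *args)
    return state

# A business process expressed purely in keywords:
result = run([
    ("OpenApp",),
    ("Login", "alice", "correct-pw"),
    ("AssertLoggedIn", True),
])
```

A data-driven layer can then feed rows like `login_data` into the same keyword table, combining the two approaches.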

Hands-on: Quest Pro Technical Analysis – What’s Promising & What’s Not

There’s a lot to talk about after our time with Quest Pro. In our prior article we talked about the experience using Meta’s new MR headset. Here we’ll get into the nitty gritty of the headset’s capabilities and performance.

Key Quest Pro Coverage:

Quest Pro Revealed – Full Specs, Price, & Release Date

Quest Pro Hands-on – The Dawn of the Mixed Reality Headset Era

Touch Pro Controllers Revealed – Also Compatible with Quest 2

As often happens with hands-on demos, I wasn’t able to sit down and really test everything I would have liked to about the headset (on account of being walked through several demos in a row), but I soaked up as much as I could about how it looked and felt to use the Quest Pro.

One of my biggest surprises about the headset is that the resolving power isn’t actually much better than Quest 2. That made sense once Meta revealed that Quest Pro shares nearly the same resolution as Quest 2. Granted, the company claims the lenses have greater clarity at the center and periphery, but in any case it isn’t the kind of leap that’s going to make the headset great for reading or using it like a computer monitor.

Photo by Road to VR

I take it this decision might have been related to the resolution of the passthrough cameras (not to mention the extra processing power required to drive the headset’s 10 on-board cameras). After all, if you had a super high resolution display but lower resolution cameras, the outside world would look blurry by contrast against the sharper virtual objects.

Speaking of passthrough… while Quest Pro finally gets a full-color view, it’s not quite perfect. Because of all the adjustments the headset is doing to render a geometrically correct passthrough view, the implementation ends up with a few artifacts that manifest as color fringing around certain objects—like a faint outline of missing color.

Photo by Road to VR

My best guess is this happens because a mono RGB camera is employed for color information which is then projected over top of a stereo view… necessarily there are some small angles where the color information is simply not present. This didn’t defeat the purpose of passthrough AR by any means (nor the appreciation for finally seeing in color), but it was something that would be nice to see fixed in future headsets.

As for the lenses, there’s no doubt that they’ve managed to compact the optical stack while retaining essentially the same kind of performance as Quest 2… or potentially better; Meta says Quest Pro has up to 75% better contrast and a 30% larger color gamut thanks to 500 local dimming elements in the backlight, though I haven’t gotten to put this to the test just yet.

Photo by Road to VR

Similarly, the removal of Fresnel lenses should, in theory, eliminate glare and god rays, but I wasn’t able to pull up the right content to see if they’ve been replaced with other kinds of artifacts. One thing I did notice though is that the lenses can reflect ambient light if angled toward direct light sources… luckily the headset comes with peripheral blinders if you want to cut this down and be more immersed.

Quest Pro with ‘full’ light blocker | Photo by Road to VR

Quest Pro isn’t just a big upgrade to the headset; the accompanying Touch Pro controllers have some interesting capabilities that I didn’t expect.

With essentially the same handle as before, they still feel great in the hand, maybe even better than my favorite Touch controller (the original Touch v1) thanks to a closer center of gravity and a nice weight from an on-board rechargeable battery and improved haptic engines.

Photo by Road to VR

The single biggest improvement to the controllers is surely the addition of on-board inside-out tracking. Not only does this remove the ring to make the controllers more compact and less likely to bump into each other, but now they can track anywhere around you, rather than going haywire if they leave sight of the headset’s cameras for too long. It’s early to say (and Meta has made no mention of it) but this could even open up the controllers to functioning like extra body trackers.

I didn’t get to put the controller tracking to the test with something demanding like Beat Saber, but until I can, I’m hoping Meta was smart enough to make sure these could hold up to the Quest platform’s most popular game.

The new capabilities on the Touch Pro controller are hit or miss for me so far.

First is the pinch sensor that allows you to push down on the thumb rest to register an input. Combined with squeezing the index finger, this creates a pretty natural pinch gesture. It feels a little novel, but I could see this being used as an easy way to emulate pinch inputs in hand-tracking apps without any need for the developers to make changes. The gesture also provides a clearer single point of interaction compared to pulling a trigger or pressing a button, both of which are often abstracted from the real position of your fingers.

Image courtesy Meta

As for the attachable stylus tip which goes on the bottom of your controller… I’m not really sold. Personally I find holding the controller upside down to use as a bulbous white-board marker to be fairly unergonomic. It’s a neat idea in theory—and I love that the stylus tip is pressure sensitive for added control—but I’m not sure the headset yet has the precision needed to really pull this off.

Photo by Road to VR

In the demos I saw that used the controller as a stylus, in both cases the virtual surface I was expected to draw on had drifted just far enough away from the physical surface it was supposed to represent that my stylus didn’t think it was close enough to start creating a mark… even though I was physically touching the controller to the physical surface.

That might be an implementation issue… after all, the pressure-sensitive tip should be able to clearly inform the system of when you are making contact and when you aren’t… but even so, once I recalibrated the surfaces and tried to draw again, I saw the surface drift fairly quickly (not by much, but even a centimeter of mismatch makes using a stylus feel weird). This might work fine for coarse annotations, like a shape here, or a few words there, but it’s far from something like a Wacom tablet.

Photo by Road to VR

As for the haptics… in my short time with them it seemed like there are multiple haptic engines inside, making the controller capable of a broader range of haptic effects, but there wasn’t a moment where I felt particularly wowed by what I was feeling compared to what I’ve felt on Quest 2.

Granted, haptics are often one of the most underutilized forms of XR output, and often the last to be considered by developers given the difficulty of authoring haptic effects and the peculiarities of different haptic engines in different controllers. I hope this is something that will become a more obvious upgrade in the future as developers have more time to play with the system and find where to best employ its capabilities.

One last thing about the Touch Pro controllers… they’re also compatible with Quest 2 (unfortunately not Quest 1). Not only does this reduce the potential for fragmentation between different controller capabilities, but it means some of the new goodness of Quest Pro can come to Quest 2 users who don’t want to drop $1,500 on the complete package.

Image courtesy Meta

I definitely give Meta credit here for a pro-customer move. Now if they really want my praise… it would be amazing if they made Touch Pro controllers compatible with any headset. In theory—because the controllers track their own position and don’t rely on unique LED patterns, or headset-based CV processing, etc—they should be able to simply report their own position to a host system which can integrate the information as needed. It’s a stretch, but it would be really great if Meta would offer all the great capabilities of the Touch Pro controllers to any headset out there that wanted to implement them, thus creating a larger ecosystem of users with matching controller capabilities.


Photo by Road to VR

Quest Pro is no doubt more compact and balanced than any headset Meta has made previously, but it’s also heavier at 722 grams to Quest 2’s 503 grams.

Granted, this is another instance where Meta’s decision to put a cheap strap on Quest 2 comes back to bite them. Despite not being able to say that Quest Pro is lighter, it might in fact be the more comfortable headset.

While ergonomics are really hard to get a grasp on without hours inside the headset, what’s clear immediately is that Quest Pro is more adjustable which is great. The headset has both a continuous IPD adjustment (supporting 55–75mm) and a continuous eye-relief adjustment. Not to mention that the on-board eye-tracking will tell you when you’ve got the lenses into the ideal position for your eyes. Ultimately this means more people will be able to dial into the best position for both visuals and comfort, and that’s always a good thing.

But, it has to be said, I have an issue with ‘halo’ headstraps generally. The forehead pad has a specific curve to it and thus wants to sit on your forehead in the spot that best matches that curve… but we all have somewhat different foreheads, which means that specific spot will be different from user to user. With no way to adjust the lenses up and down… you might have to pick between the ‘best looking’ and ‘most comfortable’ position for the headset.

I’ll have to spend more time with Quest Pro to know how much of a problem this is with the headset. And while I’d love to see other headstrap options as accessories, a halo-style headstrap might be a necessity for Quest Pro considering how much of the face the headset is attempting to track with internal cameras.

Tue, 11 Oct 2022 13:03:00 -0500 Ben Lang
Modernization: An approach to what works


With digital disruptors eating away at market share and profits hurting from prolonged, intensive cost wars between traditional competitors, businesses had been looking to reduce their cost-to-income ratios even before COVID-19. When the pandemic happened, the urgency hit a new high. On top of that came the scramble to digitize pervasively in order to survive.

But there was a problem. Legacy infrastructure, being cost-inefficient and inflexible, hindered both objectives. The need for technology modernization was never clearer. However, what wasn’t so clear was the path to this modernization.  

Should the enterprise rip out and replace the entire system, or upgrade it in parts? Should the transformation go “big bang” or proceed incrementally, in phases? To what extent should they shift to the cloud, and to which type? And so on.

The Infosys Modernization Radar 2022 addresses these and other questions. 



The state of the landscape

Currently, 88% of technology assets are legacy systems, half of which are business-critical. An additional concern is that many organizations lack the skills to adapt to the requirements of the digital era. This is why enterprises are rushing to modernize: The report found that 70% to 90% of the legacy estate will be modernized within five years.

Approaches to modernization

Different modernization approaches have different impacts. For example, non-invasive (or less invasive) approaches involve superficial changes to a few technology components and affect the enterprise only in select pockets, so they entail less expenditure. These methods may be considered when the IT architecture is still acceptable, the system is not overly complex, and the interfaces and integration logic are adequate.

But since these approaches modernize minimally, they are only a stepping stone to a more comprehensive future initiative. Some examples of less and non-invasive modernization include migrating technology frameworks to the cloud, migrating to open-source application servers, and rehosting mainframes.

Invasive strategies modernize thoroughly, making a sizable impact on multiple stakeholders, application layers and processes. Because they involve big changes, like implementing a new package or re-engineering, they take more time and cost more money than non-invasive approaches and carry a higher risk of disruption, but also promise more value.

When an organization’s IT snarl starts to stifle growth, it should look at invasive modernization by way of re-architecting legacy applications to cloud-native infrastructure, migrating traditional relational database management systems to NoSQL-type systems, or simplifying app development and delivery with low-code/no-code platforms. 

The right choice question

From the above discussion, it is apparent that not all consequences of modernization are intentional or even desirable. So that brings us back to the earlier question: What is the best modernization strategy for an enterprise?

The truth is that there’s no single answer to this question, because the choice of strategy depends on the organization’s context, resources, existing technology landscape, and business objectives. However, if the goal is to minimize risk and business disruption, then some approaches are clearly better than others.

In the Infosys Modernization Radar 2022 report, 51% of respondents taking the big-bang approach frequently suffered high levels of disruption, compared to 21% of those who modernized incrementally in phases. This is because big-bang calls for completely rewriting enterprise core systems, an approach that has often been likened to changing an aircraft engine mid-flight.

Therefore big-bang modernization makes sense only when the applications are small and easily replaceable. But most transformations entail bigger changes, tilting the balance in favor of phased and coexistence approaches, which are less disruptive and support business continuity.

Slower but much steadier

Phased modernization progresses towards microservices architecture and could take the coexistence approach. As the name suggests, this entails the parallel runs of legacy and new systems until the entire modernization — of people, processes and technology — is complete. This requires new cloud locations for managing data transfers between old and new systems.

The modernized stack points to a new location with a routing façade, an abstraction that talks to both modernized and legacy systems. To embrace this path, organizations need to analyze applications in-depth and perform security checks to ensure risks don’t surface in the new architecture. 
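The routing-façade idea described above can be sketched in a few lines. This is a minimal, hypothetical illustration: the class names and in-memory "systems" are my own assumptions, and a real façade would route API traffic between deployed services rather than dispatch function calls.

```python
# Sketch of a routing facade for phased (coexistence) modernization.
# Capabilities are migrated one at a time; the facade talks to both the
# legacy and the modernized system and routes each request accordingly.

class LegacySystem:
    def handle(self, request):
        return f"legacy:{request}"

class ModernizedService:
    def handle(self, request):
        return f"modern:{request}"

class RoutingFacade:
    """Abstraction that fronts both stacks during the parallel run."""
    def __init__(self):
        self.legacy = LegacySystem()
        self.modern = ModernizedService()
        self.migrated = set()  # capabilities already moved to the new stack

    def migrate(self, capability):
        # Flip one capability at a time, supporting business continuity.
        self.migrated.add(capability)

    def handle(self, capability, request):
        target = self.modern if capability in self.migrated else self.legacy
        return target.handle(request)

facade = RoutingFacade()
print(facade.handle("quotes", "q1"))   # legacy:q1  (not migrated yet)
facade.migrate("quotes")
print(facade.handle("quotes", "q1"))   # modern:q1  (served by the new stack)
print(facade.handle("claims", "c1"))   # legacy:c1  (still on the old system)
```

Because callers only see the façade, the cut-over of each capability is invisible to them, which is what makes the parallel run non-disruptive.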

Strategies such as the Infosys zero-disruption method frequently take the coexistence approach, since it is suited to more invasive types of modernization. Planning the parallel operation of old and new systems until the IT infrastructure and applications complete their transition is critical.

The coexistence approach enables a complete transformation to make the application scalable, flexible, modular and decoupled, utilizing microservices architecture. A big advantage is that the coexistence method leverages the best cloud offerings and gives the organization access to a rich partner ecosystem. 

An example of zero-disruption modernization that I have led is the transformation of the point-of-sale systems of an insurer. More than 50,000 rules (business and UI) involving more than 10 million lines of code were transformed using micro-change management. This reduced ticket inventory by 70%, improved maintenance productivity by about 10% and shortened new policy rollout time by about 30%. 

Summing up

Technology modernization is imperative for meeting consumer expectations, lowering costs, increasing scalability and agility, and competing against nimble, innovative next-generation players. In other words, it is the ticket to future survival. 

There are many modernization approaches, and not all of them are equal. For example, the big-bang approach, while quick and sometimes even more affordable, carries a very significant risk of disruption. Since a single hour of critical system downtime could cost as much as $300,000, maintaining business continuity during transformation is a very big priority for enterprises.

The phased coexistence approach mitigates disruption to ensure a seamless and successful transformation. 

Gautam Khanna is the vice president and global head of the modernization practice at Infosys.


Welcome to the VentureBeat community!

DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.

If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.

You might even consider contributing an article of your own!

Read More From DataDecisionMakers

Wed, 05 Oct 2022 09:32:00 -0500 Gautam Khanna, Infosys
Here’s how independent restaurants and small chains should approach technology

By now, building a tech stack is a given for restaurants of all sizes, but as we’ve previously stated, investing in technology does not take a one-size-fits-all approach. While national restaurant brands like Domino’s and Chipotle have become leaders in cutting-edge foodservice technology, how does a smaller brand compete with fewer resources? According to Peter Baghdassarian, co-owner of seven-unit Armenian kabob chain, Massis Kabob, you have to do your homework, know your company’s needs, and not be afraid of investing ahead of the curve.

Baghdassarian said that Massis Kabob, which has been serving Mediterranean food in California mall food courts since 1976, was one of the first restaurants to invest in digital video menu boards around 17 years ago.

“This was a huge deal at the time, and we had to partner with a company in Taiwan to make them because it was so hard to get them here,” he said. “The menu boards increased our business because our food was so hard to explain through regular menu displays.”

When Baghdassarian’s father first opened Massis Kabob, most customers were unfamiliar with Mediterranean food and had never eaten kabobs, Baghdassarian said, so eventually the digital menu boards helped to explain the menu of shish kabobs, pita wraps, and combo plates. This is exactly how Baghdassarian approaches all aspects of food service technology: will this make my employees’ lives easier? Is it solving a problem? If not, it goes in the garbage, he said.

Currently, Massis Kabob has a very simple tech stack: they use Toast for POS and third-party delivery integration, Incentivio for building out and operating their loyalty app, and are currently looking for a new partner for scheduling software.

“When we wanted to swap out our POS system, we did not have a full-time IT guy to help us like a big chain would,” Baghdassarian said. “If the tech requires a mouse and a computer in an office, my manager is not going to use it. We don’t have that kind of setup. It has to be on a phone or tablet and has to be very intuitive.”

While it might sound like Baghdassarian is skeptical of a lot of technology, he just knows exactly what he wants and what would work for a counter-service kebab restaurant. Massis Kabob just opened a new flagship location last month in Glendale. The 3,500-square-foot store is the largest of the chain’s seven locations and can accommodate on-premises patrons and lanes for pickup and delivery. Unlike many of Massis Kabob’s larger restaurant industry colleagues, however, the off-premises-focused location does not have a drive-thru lane because “that’s just not practical in Los Angeles.”

As the restaurant industry in general has become more off-premises-focused and people increasingly discover and interact with restaurants through apps, Baghdassarian said it has been an adjustment for their brand. “We’re not flipping burgers,” he said, adding that it takes more effort and precise timing to make kabob orders from scratch, and it’s not a continuous process like other types of quick service would be.

“We tell people their food will be ready in 12 minutes, because otherwise it will be sitting and getting cold,” Baghdassarian said. “We call our customers personally when their food is ready for pick up, and even though it’s one more step, it lets us do more quality control than our competitors.”  

Of course, this process is much easier now with the launch of Massis Kabob’s new app, which became available to download this year and allows them to have a one-on-one relationship with customers through crucial customer data.

“We’re glad to be working [with Incentivio] because I avoid having to hire four guys to sit around in an office doing data analysis,” Baghdassarian said. “McDonald’s might have a group of data scientists doing custom loyalty stuff for them, but I’m not going to have the time or half a million dollars to spend on that.”

As for Baghdassarian’s final bits of advice for investing in tech as a smaller restaurant brand? Don’t necessarily go for the biggest companies in technology just because you recognize the names, make sure the technology is user-friendly, and take advantage of your smaller size to add more personal touches:

“One time we had a tech failure on the part of a third-party delivery company and a customer’s food didn’t get picked up by a driver,” he said. “We looked him up in the database and noted that he was a regular customer and had spent thousands of dollars with us. So, I picked up the bag of food and drove 20 minutes to hand-deliver it myself.”

Contact Joanna at [email protected]

Find her on Twitter: @JoannaFantozzi

Mon, 10 Oct 2022 07:52:00 -0500
Adopting An Open Approach To Modernize IT

Rajat Bhargava is an entrepreneur, investor, author and currently CEO and cofounder of JumpCloud.

From the 1980s until the mid-2000s, the monoculture around Microsoft ruled. Users logged into Windows-managed computers and used Office and Windows File Server; businesses relied on Microsoft Active Directory (AD) to manage user identity and access.

Then, IT evolved. On-premises environments and closed systems gave way to the flexibility of the cloud. Organizations adopted Mac- and Linux-based systems. Software as a service (SaaS) environments exploded. Data centers started to be replaced by infrastructure as a service (IaaS) providers. Now, Gartner predicts that over 95% of new digital workloads will be deployed on cloud-native platforms by 2025, a dramatic increase from 30% in 2021.

With cloud servers preferred for data processing and storage, web applications now dominate the market. Wired connections gave way to wireless networks, people became more mobile through smartphones, and Google Workspace (aka G Suite, Google Apps) and M365 (aka Office 365) became as popular in the enterprise space as machine-based Office applications.

In this environment, organizations can’t be bound to anachronistic approaches as businesses shift to the cloud and globally distributed workforces. Now’s the time for companies—especially small and medium-sized enterprises (SMEs)—to approach IT with an open mind and an open approach.

“Open” in this context doesn’t mean porous or loose; it represents scalability, flexibility and agility in terms of changes in technology and developments in the stack. An open approach improves end user experience, worker productivity and satisfaction. An open approach to IT can be a critical tool in helping organizations establish zero-trust security without sacrificing the agility and flexibility made possible by the cloud.

In this article, I’ll offer some tips to getting started with this approach.

Open Identity

Modernizing IT stacks means making sure that work—remote and hybrid—functions well. Employees care about doing their job; they want easy access to the resources they need. IT teams want a similarly streamlined experience and assurance that company data remains secure without impacting productivity. My company’s survey of 506 SME IT admins found that nearly 75% would prefer a single solution for managing employee identities, access and devices rather than having to manage a number of different solutions. An open directory platform approach incorporates a cloud-hosted “virtual” domain that meets this need, offering the flexibility and security necessary to support modern workplaces.

This means creating an IT environment that consumes identities wherever they live. Not just employee identities but also device identities, allowing your system to be open to receive information from authorized sources anywhere. On the outgoing side, it means creating a single source of user identity that can be propagated out to other devices, other users or to an authorized network.

Identity as a service and cloud directories are vital tools that enable an open approach. Look for those that offer fluidity and the flexibility to change resources any time (for example, from M365 to Google Workspace or vice versa).

Flexible Security Layers

Instead of traditional perimeters, an open approach favors a creation of virtual offices and security perimeters around each employee—and whatever devices they use. Being open doesn’t equate to a cavalier security approach; it’s a way to offer authorized access to resources anywhere that is convenient and tracked for compliance and overall visibility.

Security layers can evolve with each organization’s need and should include:

Identity layer: A cloud directory houses authentication credentials and establishes centralized access control across user identity, admin access, service accounts and machines. Centering identity within a cloud directory allows SME teams to draw a security perimeter around each employee, enabling updates without disruption and providing access to on-prem and cloud-based resources.

Device layer: Most IT environments operate within an ever-evolving state of company-issued, personal and mobile devices running some combination of Mac, Windows or Linux systems. In this complicated device ecosystem, organizations should extend user identity to establish device trust, meaning that a device is known and its user is verified. A mobile device management solution (MDM) is one option that can install a remote agent to handle basics—including multifactor authentication (MFA) and permissions—zero-touch onboarding and remote lock, restart or wipe. Determine the control level you need in your device environment, factoring in options like how you honor employee device choice and how you manage your bring your own device (BYOD) policy.

IT resource layer: In office environments, employees generally use a form of single sign-on (SSO) to log into their desktop at designated workstations and then get instant access to applications and shared files and servers. In remote, hybrid and other modern IT environments, SSO should include everything from SaaS apps to systems, files, infrastructure and shared networks. Some organizations use SSO solely for web-based applications, while some centralize identity and extend it to virtually any IT resource through authentication protocols like LDAP, SAML, OpenID Connect, SSH, RADIUS and REST.

Open Insights

Given security, ongoing monitoring and compliance needs, visibility is critical to an open IT approach. Considering the breadth of access transactions, businesses should look for a holistic solution with broad coverage.

Basic event logging data is table stakes, and IT solutions should include a method for capturing discrete and unique log formats. That includes logs from SSO and from cloud RADIUS for network connection, LDAP and device connections—any log format for resources deployed in your stack.

Because integration requirements make log analysis and management solutions expensive, challenging to implement, and difficult for admins managing custom feeds for authentication protocols, consider options that offer broad analysis by enriching raw data with additional data points and sessionizing it through post-processing. Such information provides admins with insight across their entire IT environment, not just into a particular service or user.
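As a rough sketch of what "sessionizing" raw authentication events can mean in practice: group each user's events into sessions separated by gaps of inactivity. The field names ('user', 'ts') and the 30-minute gap threshold below are illustrative assumptions, not details from any specific product, and real solutions would enrich events with many more data points.

```python
# Sketch: group per-user auth events into sessions via post-processing.
from datetime import datetime, timedelta

GAP = timedelta(minutes=30)  # events farther apart than this start a new session

def sessionize(events):
    """Group events (dicts with 'user' and 'ts') into per-user session lists."""
    sessions = {}
    # Sort by user, then timestamp, so each user's events arrive in order.
    for ev in sorted(events, key=lambda e: (e["user"], e["ts"])):
        user_sessions = sessions.setdefault(ev["user"], [])
        if user_sessions and ev["ts"] - user_sessions[-1][-1]["ts"] <= GAP:
            user_sessions[-1].append(ev)   # within the gap: same session
        else:
            user_sessions.append([ev])     # gap exceeded: new session
    return sessions

logs = [
    {"user": "ana", "ts": datetime(2022, 10, 6, 9, 0),  "src": "sso"},
    {"user": "ana", "ts": datetime(2022, 10, 6, 9, 10), "src": "ldap"},
    {"user": "ana", "ts": datetime(2022, 10, 6, 13, 0), "src": "radius"},
]
by_user = sessionize(logs)
print(len(by_user["ana"]))  # 2 sessions: 9:00-9:10, then 13:00
```

Stitching SSO, LDAP and RADIUS events into one per-user timeline like this is what turns discrete log formats into environment-wide insight.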

For many organizations, extending closed legacy systems was a necessity. In the age of hybrid and remote work, it’s proving more of a liability than an asset. An open approach allows companies to embrace a diverse, modern IT environment that can keep pace with what users need, keeping them and company data secure at every access point.


Thu, 06 Oct 2022 12:00:00 -0500 Rajat Bhargava
Controversial approach aims to expand plastics recycling

Major chemical companies are backing pyrolysis plants that convert plastic waste into hydrocarbon feedstocks that can be turned into plastics again. The process uses high temperatures in the absence of oxygen to break down plastics into a mixture of smaller molecules known as pyrolysis oil. But the practice has its critics, according to a cover story in Chemical & Engineering News.

Proponents of pyrolysis argue that the process can make up for the shortcomings of traditional recycling, which captures only about 9% of plastics in the U.S., according to the U.S. Environmental Protection Agency. But not everyone is convinced, and a growing number of jurisdictions, such as California, don't consider pyrolysis recycling at all, writes Senior Editor Alex Tullo. Critics say that pyrolysis facilities can't actually accept the mixed plastic waste that they claim to, as residual contaminants gum up the process too much. A second charge is that pyrolysis is really just incineration. Another concern is scale. Pyrolysis and other forms of chemical recycling have roughly 120,000 t of capacity currently onstream in the U.S.—a minuscule fraction of the 56 million t of overall plastics production in North America in 2021.

Industry executives say they are more committed than ever to recycling and are eager to deploy pyrolysis at large scale. Firms are building facilities that are bigger than before to increase capacity. Many companies are attempting to take in more mixed waste, with approaches such as using catalysts and adsorbents to filter out and eliminate the most reactive compounds from the feedstock stream. And interest in pyrolysis is taking off, with petrochemical companies building infrastructure to process the products of pyrolysis plants and large engineering companies licensing technology to third parties that want to get into the business. How the technology works in practice will go a long way toward determining the public's perception of the plastics industry.

More information: Amid controversy, industry goes all in on plastics pyrolysis, Chemical & Engineering News (2022). … cs-pyrolysis/100/i36

Citation: Controversial approach aims to expand plastics recycling (2022, October 12) retrieved 17 October 2022 from

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.

Tue, 11 Oct 2022 12:00:00 -0500
When endings approach, people choose the familiar over the novel

When people believe that a door is closing—that they have a limited amount of time left to enjoy something, such as dining out or traveling—they gravitate to the comfort of something familiar rather than the excitement of something new, according to research published by the American Psychological Association.

In eight experiments with nearly 6,000 total participants, researchers explored whether people tend to prefer novel, exciting experiences, such as trying a new restaurant, or familiar ones, such as returning to an old favorite—and whether those preferences shift with the amount of time people believe that they have left to enjoy similar experiences.

The research was published in the Journal of Personality and Social Psychology.

Previous research has found that, on average, people tend to opt for novel and exciting experiences over familiar ones. They would rather enjoy a new movie than rewatch something they've already seen, for example, given equal access to both. However, study authors Ed O'Brien, Ph.D., and Yuji Katsumata Winet, of the University of Chicago Booth School of Business, suspected that "perceived endings" might affect those choices by nudging people to return to a meaningful old favorite.

In the first experiment, the researchers asked 500 online participants and 663 college and business school students to read hypothetical scenarios in which they were given the choice between a new experience or a familiar, beloved one—such as reading a new novel versus rereading an old favorite, or visiting a new city versus revisiting a city they loved.

Half the participants were simply asked to make the choice, while the other half were instructed to imagine that it was the last chance that they would have for a while to travel or read a novel. Overall, across all the situations, participants in the "endings" groups were more likely to choose familiar activities compared with participants in the control groups.

In the next set of experiments, the researchers moved beyond hypothetical questions to explore people's behavior in lab and real-life settings. In one, for example, participants were told they would be given a gift card to a restaurant and that the gift card needed to be used in the next month.

Then, half the participants were told to reflect on how few opportunities they would have for going to restaurants in the next month and specific things that might prevent them from going to restaurants. Finally, participants were asked whether they would prefer a gift card to a restaurant they'd visited before or to one that was new to them. Overall, 67% of the participants in the "endings" condition preferred a gift certificate to a familiar restaurant, compared with just 48% of those in the control condition.

Finally, the researchers explored why perceived endings seemed to push participants toward familiar things. They found evidence that it was not simply because the familiar experiences were a safe bet that participants knew they would enjoy, but also because they were more likely to find those familiar things personally meaningful.

"Our findings unveil nuance to what people really mean by ending on a high note," said Winet. "Endings tend to prompt people to think about what's personally meaningful to them. People like ending things on a meaningful note as it provides psychological closure, and in most cases old favorites tend to be more meaningful than exciting novelty."

"The research is especially interesting because, on the surface, it runs counter to the idea of the bucket list, whereby people tend to pursue novelty—things they've never done but have always wanted to do—as they approach the end," O'Brien said. "Here we find that, at least in these more everyday ending contexts, people actually do the opposite. They want to end on a high note by ending on a familiar note."

The researchers noted that the findings could help people better structure their time to maximize their enjoyment of experiences, for example by visiting an old favorite attraction on the last rather than the first day of a vacation. Retailers and marketers, too, could take advantage—a café slated to close for renovations might put more of its favorite dishes on the menu rather than try new items for sale.

And perhaps, according to the researchers, such psychological framings could be useful for addressing larger societal problems. "Nudging people toward repeat consumption by emphasizing endings and last chances could subtly encourage sustainable consumption by curbing the waste that necessarily accumulates from perpetual novelty-seeking," Winet said.

More information: Yuji K. Winet et al, Ending on a Familiar Note: Perceived Endings Motivate Repeat Consumption, Journal of Personality and Social Psychology (2022). DOI: 10.1037/pspa0000321

Citation: When endings approach, people choose the familiar over the novel (2022, October 6) retrieved 17 October 2022 from

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.

Thu, 06 Oct 2022 02:03:00 -0500
Labskin to develop new revolutionary approach to testing for radiation exposure

Labskin is delighted to announce our selection by The Intelligence Advanced Research Projects Activity (IARPA), part of the US Office of the Director of National Intelligence, to join a team of experts to develop new ways to evaluate radiation exposure in civilians and military personnel.

Labskin to develop new revolutionary approach to testing for radiation exposure

Image Credit: Labskin

Labskin is a key member of a consortium selected to develop these technologies in collaboration with a multidisciplinary team of experts, including professors from Columbia University in New York, Georgetown University in Washington DC, and the Georgia Institute of Technology; scientists from the American Type Culture Collection (ATCC); and computer scientists and researchers from the project lead, ARETE Associates, a defense contractor specializing in sensing solutions and machine learning algorithms.

In a project worth $800k, starting immediately, Labskin will help develop this technology into minimally invasive testing for radiation for a program known as Targeted Evaluation of Ionizing Radiation Exposure (TEI-REX). TEI-REX aims to develop novel approaches to evaluate organisms exposed to low-dose ionizing radiation. Labskin coupled with Skin Trust Club’s expertise in skin research and microbiology is essential for the project.

The goal of the project is to develop a new biodosimetry standard which could be applied to maintain the safety of military and civilian populations working or living in close proximity to ionizing radiation sources, such as: nuclear plants, nuclear vessels, ammunition, etc. Labskin’s contribution is the creation of a simple non-invasive swab test to collect signatures from the skin surface that allows machine learning algorithms to detect and quantify the impact of any amount of radiation exposure on the skin microbiome.

This is a unique opportunity to revolutionize the way we test for radiation exposure. Labskin and Skin Trust Club are at the forefront of an increasing number of cutting-edge technologies that are changing our world. This technique can also be applied to detect the impact of pollution or a variety of chemicals on the environment. Furthermore, this type of testing could be used to detect exposure to these kinds of events not only in humans but also in complex ecological systems such as soil, crops or sediments”

David Caballero-Lima, Chief Scientist, Labskin

We are committed to the success of this very exciting project. The inclusion of Artificial Intelligence and the opportunity to work with ARETE Associates, with their vast experience in complex AI applications, will result in further advances in how AI can be used in conjunction with our skin model at scale. This project coincides with completion of the expansion of our US labs in Delaware, which will greatly help the implementation of this large project. We believe our proven ability to transition technology to the field with Skin Trust Club will be invaluable as we progress this project.”

Colin O’Sullivan, Chief Information Officer, Labskin

Wed, 05 Oct 2022 16:42:00 -0500
Fairlawn takes proactive approach to crack down on catalytic converter thefts

Oct 07, 2022, 12:30 am (updated Oct 07, 2022)

By: News 12 Staff

Fairlawn is taking a pro-active approach to crack down on catalytic converter thefts as the state continues to see an uptick.

Fairlawn officials say they are etching serial numbers onto the part for free.

The catalytic converters are often stolen because they contain precious metals.

Officials say scrap yards won't take them if they have serial numbers on them.

Thu, 06 Oct 2022 12:32:00 -0500
Barry Promises More Aggressive Approach in Secondary

Sat, 15 Oct 2022 15:52:00 -0500 Bill Huber