Latest Contents of the 000-839 PDF Questions Bank

killexams.com 000-839 Exam PDF holds a complete pool of legitimate questions and answers, checked, up to date, and certified, with references and explanations where pertinent. Our main concern is not just to help you pass the exam on the first try but to really improve your knowledge of the 000-839 exam topics.

Exam Code: 000-839 Practice test 2022 by Killexams.com team
Rational Unified Process v7.0
IBM Rational resources
Biden wants an industrial renaissance. He can’t do it without immigration reform.

JOHNSTOWN, Ohio — Just 15 minutes outside of downtown Columbus, the suburbs abruptly evaporate. Past a bizarre mix of soybean fields, sprawling office parks and lonely clapboard churches is a field where the Biden administration — with help from one of the world’s largest tech companies — hopes to turn the U.S. into a hub of microchip manufacturing.

In his State of the Union address in March, President Joe Biden called this 1,000-acre spread of corn stalks and farmhouses a “field of dreams.” Within three years, it will house two Intel-operated chip facilities together worth $20 billion — and Intel is promising to invest $80 billion more now that Washington has sweetened the deal with subsidies. It’s all part of a nationwide effort to head off another microchip shortage, shore up the free world’s advanced industrial base in the face of a rising China and claw back thousands of high-end manufacturing jobs from Asia.

But even as Biden signs into law more than $52 billion in “incentives” designed to lure chipmakers to the U.S., an unusual alliance of industry lobbyists, hard-core China hawks and science advocates says the president’s dream lacks a key ingredient — a small yet critical core of high-skilled workers. It’s a politically troubling irony: To achieve the long-sought goal of returning high-end manufacturing to the United States, the country must, paradoxically, attract more foreign workers.

“For high-tech industry in general — which of course, includes the chip industry — the workforce is a huge problem,” said Julia Phillips, a member of the National Science Board. “It’s almost a perfect storm.”

From electrical engineering to computer science, the U.S. currently does not produce enough doctorate and master’s degree holders in the science, technology, engineering and math fields who can go on to work in U.S.-based microchip plants. Decades of declining investment in STEM education mean the U.S. now produces fewer native-born recipients of advanced STEM degrees than most of its international rivals.

Foreign nationals, including many educated in the U.S., have traditionally filled that gap. But a bewildering and anachronistic immigration system, historic backlogs in visa processing and rising anti-immigrant sentiment have combined to choke off the flow of foreign STEM talent precisely when a fresh surge is needed.

Powerful members of both parties have diagnosed the problem and floated potential fixes. But they have so far been stymied by the politics of immigration, where a handful of lawmakers stand in the way of reforms few are willing to risk their careers to achieve. With a short window to attract global chip companies already starting to close, a growing chorus is warning Congress they’re running out of time.

“These semiconductor investments won’t pay off if Congress doesn’t fix the talent bottleneck,” said Jeremy Neufeld, a senior immigration fellow at the Institute for Progress think tank.

Given the hot-button nature of immigration fights, the chip industry has typically been hesitant to advocate directly for reform. But as they pump billions of dollars into U.S. projects and contemplate far more expensive plans, a sense of urgency is starting to outweigh that reluctance.

“We are seeing greater and greater numbers of our employees waiting longer and longer for green cards,” said David Shahoulian, Intel’s head of workforce policy. “At some point it will become even more difficult to attract and retain folks. That will be a problem for us; it will be a problem for the rest of the tech industry.”

“At some point, you’ll just see more offshoring of these types of positions,” Shahoulian said.

A Booming Technology

Microchips (often called “semiconductors” by wonkier types) aren’t anything new. Since the 1960s, scientists — working first for the U.S. government and later for private industry — have tacked transistors onto wafers of silicon or other semiconducting materials to produce computer circuits. What has changed is the power and ubiquity of these chips.

The number of transistors researchers can fit on a chip roughly doubles every two years, a phenomenon known as Moore’s Law. In recent years, that has led to absurdly powerful chips bristling with transistors — IBM’s latest chip packs them at two-nanometer intervals into a space roughly the size of a fingernail. Two nanometers is thinner than a strand of human DNA, or about how long a fingernail grows in two seconds.

A rapid boost in processing power stuffed into ever-smaller packages led to the information technology boom of the 1990s. And things have only accelerated since — microchips remain the primary driver of advances in smartphones and missiles, but they’re also increasingly integrated into household appliances like toaster ovens, thermostats and toilets. Even the most inexpensive cars on the market now contain hundreds of microchips, and electric or luxury vehicles are loaded with thousands.

It all adds up to a commodity widely viewed as the bedrock of the new digital economy. Like fossil fuels before them, any country that controls the production of chips possesses key advantages on the global stage.

Until fairly recently, the U.S. was one of those countries. But while chips are still largely designed in America, its capacity to produce them has declined precipitously. Only 12 percent of the world’s microchip production takes place in the U.S., down from 37 percent in 1990. That percentage declines further when you exclude “legacy” chips with wider spaces between transistors — the vast majority of bleeding-edge chips are manufactured in Taiwan, and most factories not found on that island reside in Asian nations like South Korea, China and Japan.

For a long time, few in Washington worried about America’s flagging chip production. Manufacturing in the U.S. is expensive, and offshoring production to Asia while keeping R&D stateside was a good way to cut costs.

Two things changed that calculus: the Covid-19 pandemic and rising tensions between the U.S. and China.

Abrupt work stoppages sparked by viral spread in Asia sent shockwaves through finely tuned global supply chains. The flow of microchips ceased almost overnight, and then struggled to restart under new Covid surges and ill-timed extreme weather events. Combined with a spike in demand for microelectronics (sparked by generous government payouts to citizens stuck at home), the manufacturing stutter kicked off a chip shortage from which the world is still recovering.

Even before the pandemic, growing animosity between Washington and Beijing caused officials to question the wisdom of ceding chip production to Asia. China’s increasingly bellicose threats against Taiwan caused some to conjure up nightmare scenarios of an invasion or blockade that would sever the West from its supply of chips. The Chinese government was also pouring billions of dollars into a crash program to boost its own lackluster chip industry, prompting fears that America’s top foreign adversary could one day corner the market.

By 2020 the wheels had begun to turn on Capitol Hill. In January 2021, lawmakers passed as part of their annual defense bill the CHIPS for America Act, legislation authorizing federal payouts for chip manufacturers. But they then struggled to finance those subsidies. Although they quickly settled on more than $52 billion for chip manufacturing and research, lawmakers had trouble decoupling those sweeteners from sprawling anti-China “competitiveness” bills that stalled for over a year.

But those subsidies, as well as new tax credits for the chip industry, were finally sent to Biden’s desk in late July. Intel isn’t the only company that’s promised to supercharge U.S. projects once that money comes through — Samsung, for example, is suggesting it will expand its new $17 billion chip plant outside of Austin, Texas, to a nearly $200 billion investment. Lawmakers are already touting the subsidies as a key step toward an American renaissance in high-tech manufacturing.

Quietly, however, many of those same lawmakers — along with industry lobbyists and national security experts — fear all the chip subsidies in the world will fall flat without enough high-skilled STEM workers. And they accuse Congress of failing to seize multiple opportunities to address the problem.

STEM help wanted

In Columbus, just miles from the Johnstown field where Intel is breaking ground, most officials don’t mince words: The tech workers needed to staff two microchip factories, let alone eight, don’t exist in the region at the levels needed.

“We’re going to need a STEM workforce,” admitted Jon Husted, Ohio’s Republican lieutenant governor.

But Husted and others say they’re optimistic the network of higher ed institutions spread across Columbus — including Ohio State University and Columbus State Community College — can beef up the region’s workforce fast.

“I feel like we’re built for this,” said David Harrison, president of Columbus State Community College. He highlighted the repeated refrain from Intel officials that 70 percent of the 3,000 jobs needed to fill the first two factories will be “technician-level” jobs requiring two-year associate degrees. “These are our jobs,” Harrison said.

Harrison is anxious, however, over how quickly he and other leaders in higher ed are expected to convince thousands of students to sign up for the required STEM courses and join Intel after graduation. The first two factories are slated to be fully operational within three years, and will need significant numbers of workers well before then. He said his university still lacks the requisite infrastructure for instruction on chip manufacturing — “we’re missing some wafer processing, clean rooms, those kinds of things” — and explained that funding recently provided by Intel and the National Science Foundation won’t be enough. Columbus State will need more support from Washington.

“I don’t know that there’s a great Plan B right now,” said Harrison, adding that the new facilities will run into “the tens of millions.”

A lack of native STEM talent isn’t unique to the Columbus area. Across the country, particularly in regions where the chip industry is planning to relocate, officials are fretting over a perceived lack of skilled technicians. In February, Taiwan Semiconductor Manufacturing Company cited a shortage of skilled workers when announcing a six-month delay in the move-in date for its new plant in Arizona.

“Whether it’s a licensure program, a two-year program or a Ph.D., at all levels, there is a shortfall in high-tech STEM talent,” said Phillips. The NSB member highlighted the “missing millions of people that are not going into STEM fields — that basically are shut out, even beginning in K-12, because they’re not exposed in a way that attracts them to the field.”

Industry groups, like the National Association of Manufacturers, have long argued a two-pronged approach is necessary when it comes to staffing the high-tech sector: reevaluating immigration policy while also investing heavily in workforce development.

The abandoned House and Senate competitiveness bills both included provisions that would have enhanced federal support for STEM education and training. Among other things, the House bill would have expanded Pell Grant eligibility to students pursuing career-training programs.

“We have for decades incentivized degree attainment and not necessarily skills attainment,” said Robyn Boerstling, NAM’s vice president of infrastructure, innovation and human resources policy. “There are manufacturing jobs today that could be filled with six weeks of training, or six months, or six years; we need all of the above.”

But those provisions were scrapped, after Senate leadership decided a conference between the two chambers on the bills was too unwieldy to reach agreement before the August recess.

Katie Spiker, managing director of government affairs at National Skills Coalition, said the abandoned Pell Grant expansion shows Congress “has not responded to worker needs in the way that we need them to.” Amid criticisms that the existing workforce development system is unwieldy and ineffective, the decision to scrap new upgrades is a continuation of a trend of disinvesting in workers who hope to obtain the skills they need to meet employer demand.

“And it becomes an issue that only compounds itself over time,” Spiker said. “As technology changes, people need to change and evolve their skills.”

“If we’re not getting people skilled up now, then we won’t have people that are going to be able to evolve and skill up into the next generation of manufacturing that we’ll do five years from now.”

Congress finally sent the smaller Chips and Science Act — which includes the chip subsidies and tax credits, $200 million to develop a microchip workforce and a slate of R&D provisions — to the president’s desk in late July. The bill is expected to enhance the domestic STEM pool (at least on the margins). But it likely falls short of the generational investments many believe are needed.

“You could make some dent in it in six years,” said Phillips. “But if you really want to solve the problem, it’s closer to a 20-year investment. And the ability of this country to invest in anything for 20 years is not phenomenal.”

Immigration Arms Race

The microchip industry is in the midst of a global reshuffling that’s expected to last the better part of the decade — and the U.S. isn’t the only country rolling out the red carpet. Europe, Canada, Japan and other regions are also worried about their security, and are preparing sweeteners for microchip firms to set up shop within their borders. Cobbling together an effective STEM workforce in a short time frame will be key to persuading companies to choose America instead.

That will be challenging at the technician level, which represents around 70 percent of workers in most microchip factories. But those jobs require only two-year degrees — and over a six-year period, it’s possible a sustained education and recruitment effort can produce enough STEM workers to at least keep the lights on.

It’s a different story entirely for Ph.D.s and master’s degrees, which take much longer to earn and which industry reps say make up a smaller but crucial component of a factory’s workforce.

Gabriela González, Intel’s head of global STEM research, policy and initiatives, said about 15 percent of factory workers must have doctorates or master’s degrees in fields such as material and electrical engineering, computer science, physics and chemistry. Students coming out of American universities with those degrees are largely foreign nationals — and increasingly, they’re graduating without an immigration status that lets them work in the U.S., and with no clear pathway to achieving that status.

A National Science Board estimate from earlier this year shows a steadily rising proportion of foreign-born students with advanced STEM skills. That’s especially true for degrees crucial to the chip industry — nearly 60 percent of computer science Ph.D.s are foreign born, as are more than 50 percent of engineering doctorates.

“We are absolutely reliant on being able to hire foreign nationals to fill those needs,” said Intel’s Shahoulian. Like many in the chip industry, Shahoulian contends there simply aren’t enough high-skilled STEM professionals with legal status to simultaneously serve America’s existing tech giants and an influx of microchip firms.

Some academics, such as Howard University’s Ron Hira, suggest the shortage of workers with STEM degrees is overblown, and industry simply seeks to import cheaper, foreign-born labor. But that view contrasts with those held by policymakers on Capitol Hill or people in the scientific and research communities. In a report published in late July by the Government Accountability Office, all 17 of the experts surveyed agreed the lack of a high-skilled STEM workforce was a barrier to new microchip projects in the U.S. — and most said some type of immigration reform would be needed.

Many, if not most, of the foreign nationals earning advanced STEM degrees from U.S. universities would prefer to stay and work in the country. But America’s immigration system is turning away these workers in record numbers — and at the worst possible time.

Ravi (not his real name, given his tenuous immigration status) is an Indian national. Nearly three years ago, he graduated from a STEM master’s program at a prestigious eastern university before moving to California to work as a design verification lead at an international chip company. He’s applied three times for an H-1B visa, a high-skilled immigration program used extensively by U.S. tech companies. But those visas are apportioned via a lottery, and Ravi lost each time. His current visa only allows him to work through the end of the year — so Ravi is giving up and moving to Canada, where he’s agreed to take a job with another chip company. Given his skill set, he expects to quickly receive permanent legal status.

“The application process is incredibly simple there,” said Ravi, noting that Canadian officials were apologetic over their brief 12-week processing time (they’re swamped by refugee applications, he said).

If given the choice, Ravi said he would’ve probably stayed in California. But his story now serves as a cautionary tale for his younger brother back home. “Once he sort of completed his undergrad back in India, he did mention that he is looking at more immigration-friendly countries,” Ravi said. “He’s giving Canada more thought, at this point, than the United States.”

Ravi’s story is far from unique, particularly for Indian nationals. The U.S. imposes annual per-country caps on green cards — and between a yearly crush of applicants and a persistent processing backlog, Indians (regardless of their education or skill level) can expect to wait as long as 80 years for permanent legal status. A report released earlier this year by the libertarian Cato Institute found more than 1.4 million skilled immigrants are now stuck in green card backlogs, just a slight drop from 2020’s all-time high of more than 1.5 million.

The third rail of U.S. politics

The chip industry has shared its anxiety over America’s slipping STEM workforce with Washington, repeatedly asking Congress to make it easier for high-skilled talent to stay. But unlike their lobbying for subsidies and tax breaks — which has gotten downright pushy at times — they’ve done so very quietly. While chip lobbyists have spent months telling anyone who will listen why the $52 billion in financial incentives are a “strategic imperative,” they’ve only recently been willing to discuss their immigration concerns on the record.

In late July, nine major chip companies planned to send an open letter to congressional leadership warning that the shortage of high-skilled STEM workers “has truly never been more acute” and urging lawmakers to “enact much-needed green card reforms.” But the letter was pulled at the last minute, after some companies grew wary of wading into a tense immigration debate at the wrong time.

Leaders in the national security community have been less shy. In May, more than four dozen former officials sent a letter to congressional leadership urging them to shore up America’s slipping immigration edge before Chinese technology leapfrogs ours. “With the world’s best STEM talent on its side, it will be very hard for America to lose,” they wrote. “Without it, it will be very hard for America to win.”

The former officials exhorted lawmakers to take up and pass provisions in the House competitiveness bill that would’ve lifted green card caps for foreign nationals with STEM Ph.D.s or master’s degrees. It’d be a relatively small number of people — a February study from Georgetown University’s Center for Security and Emerging Technology suggested the chip industry would only need around 3,500 foreign-born workers to effectively staff new U.S.-based factories.

“This is such a small pool of people that there’s already an artificial cap on it,” said Klon Kitchen, a senior fellow focused on technology and national security at the conservative American Enterprise Institute.

Kitchen suggested the Republican Party’s wariness toward immigration shouldn’t apply to these high-skilled workers, and some elected Republicans agree. Sen. John Cornyn, whose state of Texas is poised to gain from the expansion of chip plants outside Austin, took up the torch — and almost immediately got burned.

Sen. Chuck Grassley, Iowa’s senior Republican senator, blocked repeated attempts by Cornyn, Democrats and others to include the green card provision in the final competitiveness package. Finding relief for a small slice of the immigrant community, Grassley reasoned, “weakens the possibility to get comprehensive immigration reform down the road.” He refused to budge even after Biden administration officials warned him of the national security consequences in a classified June 16 briefing, which was convened specifically for him. The effort has been left for dead (though a push to shoehorn a related provision into the year-end defense bill is ongoing).

Many of Grassley’s erstwhile allies are frustrated with his approach. “We’ve been talking about comprehensive immigration reform for how many decades?” asked Kitchen, who said he’s “not inclined” to let America’s security concerns “tread water in the background” while Congress does nothing to advance broader immigration bills.

Most Republicans in Congress agree with Kitchen. But so far it’s Cornyn, not Grassley, who’s paid a price. After helping broker a deal on gun control legislation in June, Cornyn was attacked by Breitbart and others on his party’s right flank for telling a Democratic colleague immigration would be next.

“Immigration is one of the most contentious issues here in Congress, and we’ve shown ourselves completely incapable of dealing with it on a rational basis,” Cornyn said in July. The senator said he’d largely given up on persuading Grassley to abandon his opposition to new STEM immigration provisions. “I would love to have a conversation about merit-based immigration,” Cornyn said. “But I don’t think, under the current circumstances, that’s possible.”

Cornyn blamed that in part on the far right’s reflexive outrage to any easing of immigration restrictions. “Just about anything you say or do will get you in trouble around here these days,” he said.

Given that reality, few Republicans are willing to stick their necks out on the issue.

“If you look at the messaging coming out of [the National Republican Senatorial Committee] or [the Republican Attorneys General Association], it’s all ‘border, border, border,’” said Rebecca Shi, executive director of the American Business Immigration Coalition. Shi said even moderate Republicans hesitate to publicly advance arguments “championing these sensible visas for Ph.D. STEM talents for integrated circuits for semiconductors.”

“They’re like … ‘I can’t say those phrases until after the elections,’” Shi said.

That skittishness extends to state-level officials — Ohio’s Husted spent some time expounding on the benefits of “bringing talented people here to do the work in America, rather than having companies leave America to have it done somewhere else.” He suggested that boosting STEM immigration would be key to Intel’s success in his state. But when asked whether he’s taken that message to Ohio’s congressional delegation — after all, he said he’d been pestering them to pass the chip subsidies — Husted hedged.

“My job is to do all I can for the people of the state of Ohio. There are other people whose job it is to message those other things,” Husted said. “But if asked, you heard what my answer is.”

Of course, Republicans also pin some of the blame on Democrats. “The administration ignores the fire at the border and the chaos there, which makes it very hard to have a conversation about controlling immigration flows,” Cornyn said.

And while Democratic lawmakers reject that specific concern, some admit their side hasn’t prioritized STEM immigration as it should.

“Neither team has completely clean hands,” said Sen. Mark Warner, the chair of the Senate Intelligence Committee. Warner noted that Democrats have also sought to hold back STEM immigration fixes as “part of a sweetener” so that business-friendly Republicans would in turn back pathways to citizenship for undocumented immigrants. He also dinged the chip companies, claiming the issue is “not always as straightforward” as the industry would like to frame it and that tech companies sometimes hope to pay less for foreign-born talent.

But Warner still supports the effort to lift green card caps for STEM workers. “Without that high-skilled immigration, it’s not like those jobs are going to disappear,” he said. “They’re just gonna move to another country.”

And despite their rhetoric, it’s hard to deny that congressional Republicans are largely responsible for continued inaction on high-skilled immigration — even as their allies in the national security space become increasingly insistent.

Stuck on STEM immigration

Though they’ve had to shrink their ambitions, lawmakers working to lift green card caps for STEM immigrants haven’t given up. A jurisdictional squabble between committees in July prevented advocates from including in the House’s year-end defense bill a provision that would’ve nixed the caps for Ph.D.s in “critical” STEM fields. They’re now hoping to shoehorn the provision into the Senate’s defense bill instead, and have tapped Republican Sen. Thom Tillis of North Carolina as their champion in the upper chamber.

But Tillis is already facing pushback from the right. And despite widespread support, few truly believe there’s enough momentum to overcome Grassley and a handful of other lawmakers willing to block any action.

“Most members on both sides recognize that this is a problem they need to resolve,” said Intel’s Shahoulian. “They’re just not at a point yet where they’re willing to compromise and take the political hits that come with it.”

The global chip industry is moving in the meantime. While most companies are still planning to set up shop in the U.S. regardless of what happens with STEM immigration, Shahoulian said inaction on that front will inevitably limit the scale of investments by Intel and other firms.

“You’re already seeing that dynamic playing out,” he said. “You’re seeing companies set up offices in Canada, set up offices elsewhere, move R&D work elsewhere in the world, because it is easier to retain talent elsewhere than it is here.”

“This is an issue that will progressively get worse,” Shahoulian said. “It’s not like there will be some drop-dead deadline. But yeah, it’s getting difficult.”

Intel is still plowing ahead in Johnstown — backhoes are churning up dirt, farmers have been bought out of homes owned by their families for generations and the extensive water and electric infrastructure required for eight chip factories is being laid. Whether those bets will pay off in the long-term may rest on Congress’ ability to thread the needle on STEM immigration. And there’s little optimism at the moment.

Sen. Maria Cantwell, the chair of the Senate Commerce Committee, said she sometimes wishes she could “shake everybody and tell them to wake up.” But she believes economic and geopolitical realities will force Congress to open the door to high-skilled foreign workers — eventually.

“I think the question is whether you do that now or in 10 years,” Cantwell said. “And you’ll be damn sorry if you wait for 10 years.”

Colleges Focus on Web App Security

The ever-expanding number of mobile users running web apps has raised the profile of the IT security staff at Chapman University in Orange, Calif. Today, students use web browsers on mobile devices to access event calendars, check bus schedules, view grades, read assignments and participate in discussions.

Todd Plesco, the university’s director of information security, says IT security’s role will only expand as the college deploys a web-based version of Oracle PeopleSoft. The new enterprise, resource and planning system lets faculty and staff access human resources, finance and student record information via web browsers.

Keeping these web apps secure requires multiple layers of defense, and Plesco says penetration testing serves as the first layer. The IT staff also bolsters security with Fortinet’s FortiGate web application firewall, a product that complements the university’s mix of Fortinet firewalls for its existing network.

“We know that as we add more web applications, we will have to step up security. We’re taking it one step at a time,” Plesco says, adding that while penetration testing is still done manually, the university may switch to a commercial tool sometime soon.

Top Priority

Jeff Wilson, principal analyst with Infonetics Research, says there are many reasons why colleges and universities should make securing web applications a top priority. Mobile versions of web apps are yet another stream of code that must be maintained, managed and checked for vulnerabilities.

“Custom code, or simply poor coding that leaves vulnerabilities in the code during development, can cause real security problems,” Wilson says.

“If you have the right tools and can get at the code to fix the problems, you’ll be in pretty good shape. But if you don’t have access to the code because the application was outsourced or built on a platform where you are at the mercy of the platform developer, it’s more difficult to find and fix vulnerabilities,” he adds.

At Carnegie Mellon University in Pittsburgh, development and testing of web applications takes place campuswide.

86%
The percentage of web applications that are vulnerable to an injection attack, where internal databases are accessed through a website

SOURCE: 2011 Top Cyber Security Risks Report (HP)

“We have IT shops all over campus delivering web-based applications using different technology and tools,” explains Mary Ann Blair, the university’s director of information security.

Because app development is widely distributed across campus, Blair’s staff focuses on publishing security guidelines, providing design consulting and review, hosting training opportunities and conducting penetration testing.

“The goal is to ensure that campus developers are equipped to deploy web apps that can defend against common attacks such as SQL injection, cross-site scripting and cross-site request forgery,” Blair adds.
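To make those attack classes concrete, here is a minimal sketch of two of the defenses Blair mentions. It is our own illustration (not drawn from Carnegie Mellon's guidelines), using only the Python standard library; the table and parameter names are hypothetical:

```python
import html
import sqlite3

def find_student(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver binds `username` as data, so input such as
    # "x' OR '1'='1" cannot rewrite the SQL statement (SQL injection defense).
    cur = conn.execute("SELECT id, name FROM students WHERE username = ?", (username,))
    return cur.fetchone()

def render_comment(comment: str) -> str:
    # Escape user-supplied text before embedding it in HTML, so a payload like
    # "<script>alert(1)</script>" renders as inert text (XSS defense).
    return "<p>" + html.escape(comment) + "</p>"
```

Cross-site request forgery is typically handled one layer up, by tying each state-changing form submission to a per-session token that an attacker's forged request cannot supply.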

Tools of the App Security Trade

There are several possible tools that colleges and universities can use to ensure the security of their web apps, including penetration testing and web application firewalls.

Penetration testing tools, such as IBM Rational AppScan and Tenable Network Security’s Nessus ProfessionalFeed, actively try to find vulnerabilities in web apps caused by problems such as cross-site scripting and SQL injection. They work by simulating the methods real attackers might use, but without actually damaging the web application. Typical features of these tools include both static and dynamic testing, content audits  (for example, for adult content and personally identifiable information), and the ability to pinpoint specific lines of code causing problems. They are also used for compliance auditing.
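As a rough illustration of the kind of simulated, non-destructive check such tools automate, the sketch below submits a harmless marker string and tests whether the page reflects it back unescaped — a sign of a possible cross-site scripting vector. The target URL and parameter name are hypothetical, and real scanners like those named above do far more than this:

```python
import urllib.parse
import urllib.request

MARKER = "pentest-marker-31337"

def probe_reflection(base_url: str, param: str) -> bool:
    """Return True if the raw marker is echoed back in the response body."""
    query = urllib.parse.urlencode({param: f"<{MARKER}>"})
    with urllib.request.urlopen(f"{base_url}?{query}") as resp:
        body = resp.read().decode("utf-8", errors="replace")
    # An unescaped "<...>" echo suggests the parameter deserves manual review.
    return f"<{MARKER}>" in body

# Example against a hypothetical test host:
# probe_reflection("http://webapp-test.example.edu/search", "q")
```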

Web application firewalls are just that: firewalls that protect web applications. Marketed by providers such as Fortinet, Barracuda Networks, F5 Networks, WatchGuard Technologies and Imperva, these products block threats such as cross-site scripting, SQL injection, buffer overflows, denial of service and cookie poisoning. They can also help organizations comply with the Payment Card Industry Data Security Standard. Other features include load balancing and Secure Sockets Layer offloading and acceleration.

Although these tools are invaluable, there is also great value in old-fashioned ingenuity, says Jeff Wilson, principal analyst at Infonetics.

“Whatever investment you make in web application security, there will still be bugs you miss,” he says. “Consider trying the crowdsourcing approach, like Google does. They pay a bounty to anyone who finds bugs in their code.”

Comprehensive Change Management for SoC Design
By Sunita Chulani1, Stanley M. Sutton Jr.1, Gary Bachelor2, and P. Santhanam1
1 IBM T. J. Watson Research Center, 19 Skyline Drive, Hawthorne, NY 10532 USA
2 IBM Global Business Services, PO BOX 31, Birmingham Road, Warwick CV34 5JL UK

Abstract

Systems-on-a-Chip (SoC) are becoming increasingly complex, leading to corresponding increases in the complexity and cost of SoC design and development.  We propose to address this problem by introducing comprehensive change management.  Change management, which is widely used in the software industry, involves controlling when and where changes can be introduced into components so that changes can be propagated quickly, completely, and correctly.
In this paper we address two main topics:   One is typical scenarios in electronic design where change management can be supported and leveraged. The other is the specification of a comprehensive schema to illustrate the varieties of data and relationships that are important for change management in SoC design.

1.    INTRODUCTION

SoC designs are becoming increasingly complex.  Pressures on design teams and project managers are rising because of shorter times to market, more complex technology, more complex organizations, and geographically dispersed multi-partner teams with varied “business models” and higher “cost of failure.”

Current methodology and tools for designing SoC need to evolve with market demands in key areas:  First, multiple streams of inconsistent hardware (HW) and software (SW) processes are often integrated only in the late stages of a project, leading to unrecognized divergence of requirements, platforms, and IP, resulting in unacceptable risk in cost, schedule, and quality.  Second, even within a stream of HW or SW, there is inadequate data integration, configuration management, and change control across life cycle artifacts.  Techniques used for these are often ad hoc or manual, and the cost of failure is high.  This makes it difficult for a distributed team to be productive and inhibits the early, controlled reuse of design products and IP.  Finally, the costs of deploying and managing separate dedicated systems and infrastructures are becoming prohibitive.

We propose to address these shortcomings through comprehensive change management, which is the integrated application of configuration management, version control, and change control across software and hardware design.  Change management is widely practiced in the software development industry.  There are commercial change-management systems available for use in electronic design, such as MatrixOne DesignSync [4], ClioSoft SOS [2], IC Manage Design Management [3], and Rational ClearCase/ClearQuest [1], as well as numerous proprietary, “home-grown” systems.  But to date change management remains an under-utilized technology in electronic design.

In SoC design, change management can help with many problems.  For instance, when IP is modified, change management can help in identifying blocks in which the IP is used, in evaluating other affected design elements, and in determining which tests must be rerun and which rules must be re-verified. Or, when a new release is proposed, change management can help in assessing whether the elements of the release are mutually consistent and in specifying IP or other resources on which the new release depends.

More generally, change management gives the ability to analyze the potential impact of changes by tracing to affected entities and the ability to propagate changes completely, correctly, and efficiently.  For design managers, this supports decision-making as to whether, when, and how to make or accept changes.  For design engineers, it helps in assessing when a set of design entities is complete and consistent and in deciding when it is safe to make (or adopt) a new release.

In this paper we focus on two elements of this approach for SoC design.  One is the specification of representative use cases in which change management plays a critical role.  These show places in the SoC development process where information important for managing change can be gathered.  They also show places where appropriate information can be used to manage the impact of change.  The second element is the specification of a generic schema for modeling design entities and their interrelationships.  This supports traceability among design elements, allows designers to analyze the impact of changes, and facilitates the efficient and comprehensive propagation of changes to affected elements.

The following section provides some background on a survey of subject-matter experts that we performed to refine the problem definition.     

2.    BACKGROUND

We surveyed some 30 IBM subject-matter experts (SMEs) in electronic design, change management, and design data modeling.  They identified 26 problem areas for change management in electronic design.  We categorized these as follows:

  • visibility into project status
  • day-to-day control of project activities
  • organizational or structural changes
  • design method consistency
  • design data consistency

Major themes that crosscut these included:

  • visibility and status of data
  • comprehensive change management
  • method definition, tracking, and enforcement
  • design physical quality
  • common approach to problem identification and handling

We held a workshop with the SMEs to prioritize these problems, and two emerged as the most significant:  First, the need for basic management of the configuration of all the design data and resources of concern within a project or work package (libraries, designs, code, tools, test suites, etc.); second, the need for designer visibility into the status of data and configurations in a work package.

To realize these goals, two basic kinds of information are necessary:  1) An understanding of how change management may occur in SoC design processes; 2) An understanding of the kinds of information and relationships needed to manage change in SoC design.  We addressed the former by specifying change-management use cases; we addressed the latter by specifying a change-management schema.

3.    USE CASES

This section describes typical use cases in the SoC design process.  Change is a pervasive concern in these use cases—they cause changes, respond to changes, or depend on data and other resources that are subject to change.  Thus, change management is integral to the effective execution of each of these use cases. We identified nine representative use cases in the SoC design process, which are shown in Figure 1.


Figure 1.  Use cases in SoC design

In general there are four ways of initiating a project: New Project, Derive, Merge and Retarget.  New Project is the case in which a new project is created from the beginning.  The Derive case is initiated when a new business opportunity arises to base a new project on an existing design. The Merge case is initiated when an actor wants to merge configuration items during implementation of a new change management scheme or while working with teams/organizations outside of the current scheme. The Retarget case is initiated when a project is restructured due to resource or other constraints.  In all of these use cases it is important to institute proper change controls from the outset.  New Project starts with a clean slate; the other scenarios require changes from (or to) existing projects.    

Once the project is initiated, the next phase is to update the design. There are two use cases in the Update Design composite state.  New Design Elements addresses the original creation of new design elements.  These become new entries in the change-management system.  The Implement Change use case entails the modification of an existing design element (such as fixing a bug).  It is triggered in response to a change request and is supported and governed by change-management data and protocols.

The next phase, Resolve Project, consists of three use cases. Backout is the use case by which changes made in the previous phase can be reversed.  Release is the use case by which a project is released for cross-functional use. The Archive use case protects design assets by keeping a secure copy of the design and environment.

4.    CHANGE-MANAGEMENT SCHEMA

The main goal of the change-management schema is to enable the capture of all information that might contribute to change management.

4.1     Overview

The schema, which is defined in the Unified Modeling Language (UML) [5], consists of several high-level packages (Figure 2).



Figure 2.  Packages in the change-management schema

Package Data represents types for design data and metadata.  Package Objects and Data defines types for objects and data.  Objects are containers for information, data represent the information.  The main types of object include artifacts (such as files), features, and attributes.  The types of objects and data defined are important for change management because they represent the principle work products of electronic design: IP, VHDL and RTL specifications, floor plans, formal verification rules, timing rules, and so on.  It is changes to these things for which management is most needed.

The package Types defines types to represent the types of objects and data.  This enables some types in the schema (such as those for attributes, collections, and relationships) to be defined parametrically in terms of other types, which promotes generality, consistency, and reusability of schema elements.

Package Attributes defines specific types of attribute.  The basic attribute is just a name-value pair that is associated to an object.  (More strongly-typed subtypes of attribute have fixed names, value types, attributed-object types, or combinations of these.)  Attributes are one of the main types of design data, and they are important for change management because they can represent the status or state of design elements (such as version number, verification level, or timing characteristics).

Package Collections defines types of collections, including collections with varying degrees of structure, typing, and constraints.  Collections are important for change management in that changes must often be coordinated for collections of design elements as a group (e.g., for a work package, verification suite, or IP release).  Collections are also used in defining other elements in the schema (for example, baselines and change sets).

The package Relationships defines types of relationships.  The basic relationship type is an ordered collection of a fixed number of elements.  Subtypes provide directionality, element typing, and additional semantics.  Relationships are important for change management because they can define various types of dependencies among design data and resources.  Examples include the use of macros in cores, the dependence of timing reports on floor plans and timing contracts, and the dependence of test results on tested designs, test cases, and test tools.  Explicit dependency relationships support the analysis of change impact and the efficient and precise propagation of changes.
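A minimal sketch of how such explicit dependency relationships enable impact analysis, assuming a simple string-keyed adjacency graph rather than the paper's typed schema (the element names are illustrative):

```python
from collections import defaultdict

# dependents maps a design element to the elements that depend on it.
dependents = defaultdict(set)
dependents["macro_A"].add("core_B")            # core_B uses macro_A
dependents["core_B"].add("timing_report_B")    # timing report derived from core_B

def impact_of(changed: str) -> set:
    """Return every element transitively affected by a change to `changed`."""
    affected, frontier = set(), [changed]
    while frontier:
        for nxt in dependents[frontier.pop()]:
            if nxt not in affected:
                affected.add(nxt)
                frontier.append(nxt)
    return affected

print(impact_of("macro_A"))  # affected: core_B and timing_report_B
```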

The package Specifications defines types of data specification and definition.  Specifications specify an informational entity; definitions denote a meaning and are used in specifications.

Package Resources represents things (other than design data) that are used in design processes, for example, design tools, IP, design methods, and design engineers.  Resources are important for change management in that resources are used in the actions that cause changes and in the actions that respond to changes.  Indeed, minimizing the resources needed to handle changes is one of the goals of change management.

Resources are also important in that changes to a resource may require changes to design elements that were created using that resource (for example, when changes to a simulator may require reproduction of simulation results).

Package Events defines types and instances of events.  Events are important in change management because changes are a kind of event, and signals of change events can trigger processes to handle the change.

The package Actions provides a representation for things that are done, that is, for the behaviors or executions of tools, scripts, tasks, method steps, etc.  Actions are important for change in that actions cause change.  Actions can also be triggered in response to changes and can handle changes (such as by propagating changes to dependent artifacts).

Subpackage Action Definitions defines the type Action Execution, which contains information about a particular execution of a particular action.  It refers to the definition of the action and to the specific artifacts and attributes read and written, resources used, and events generated and handled.  Thus an action execution indicates particular artifacts and attributes that are changed, and it links those to the particular process or activity by which they were changed, the particular artifacts and attributes on which the changes were based, and the particular resources by which the changes were effected.  Through this, particular dependency relationships can be established between the objects, data, and resources.  This is the specific information needed to analyze and propagate concrete changes to artifacts, processes, and resources.


Package Baselines defines types for defining mutually consistent sets of design artifacts. Baselines are important for change management in several respects.  The elements in a baseline must be protected from arbitrary changes that might disrupt their mutual consistency, and the elements in a baseline must be changed in mutually consistent ways in order to evolve a baseline from one version to another.

The final package in Figure 2 is the Change package.  It defines types for representing change explicitly.  These include managed objects, which are objects with an associated change log; change logs and change sets, which are types of collection that contain change records; and change records, which record specific changes to specific objects.  They can include a reference to the action execution that caused the change.

The subpackage Change Requests includes types for modeling change requests and responses.  A change request has a type, description, state, priority, and owner.  It can have an associated action definition, which may be the definition of the action to be taken in processing the change request.  A change request also has a change-request history log.

4.2    Example

An example of the schema is shown in Figure 3.  The clear boxes (upper part of diagram) show general types from the schema and the shaded boxes (lower part of the diagram) show types (and a few instances) defined for a specific high-level design process project at IBM.



Figure 3.  Example of change-management data

The figure shows a dependency relationship between two types of design artifact, VHDLArtifact and FloorPlannableObjects.  The relationship is defined in terms of a compiler that derives instances of FloorPlannableObjects from instances of VHDLArtifact.  Execution of the compiler constitutes an action that defines the relationship.  The specific schema elements are defined based on the general schema using a variety of object-oriented modeling techniques, including subtyping (e.g., VHDLArtifact), instantiation (e.g., Compile1) and parameterization (e.g., VHDLFloorplannableObjectsDependency).
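A rough Python rendering may make this pattern concrete. The class names follow the figure; the fields, instances, and the dataclass encoding are our own assumptions, not the paper's implementation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Artifact:
    name: str

@dataclass
class VHDLArtifact(Artifact):           # subtype of the general artifact type
    pass

@dataclass
class FloorPlannableObjects(Artifact):  # derived from VHDL source by compilation
    pass

@dataclass
class ActionExecution:
    """One run of an action (here a compile) that establishes a dependency."""
    action_name: str
    reads: List[Artifact] = field(default_factory=list)
    writes: List[Artifact] = field(default_factory=list)

vhdl = VHDLArtifact("core_top.vhdl")
fpo = FloorPlannableObjects("core_top.fpo")
compile1 = ActionExecution("Compile1", reads=[vhdl], writes=[fpo])
# compile1 records that fpo was derived from vhdl, so a change to vhdl
# implies that fpo must be regenerated and re-verified.
```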

5.    USE CASE IMPLEMENT CHANGE

Here we present an example use case, Implement Change, with details on its activities and how the activities use the schema presented in Section 4.  This use case is illustrated in Figure 4.



Figure 4.  State diagram for use case Implement Change

The Implement Change use case addresses the modification of an existing design element (such as fixing a bug).  It is triggered by a change request.  The first steps of this use case are to identify and evaluate the change request to be handled.  Then the relevant baseline is located, loaded into the engineer’s workspace, and verified.  At this point the change can be implemented.  This begins with the identification of the artifacts that are immediately affected.  Then dependent artifacts are identified and changes propagated according to dependency relationships.  (This may entail several iterations.)  Once a stable state is achieved, the modified artifacts are verified and regression tested.  Depending on test results, more changes may be required.  Once the change is considered acceptable, any learning and metrics from the process are captured and the new artifacts and relationships are promoted to the public configuration space.
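The control flow reads roughly as follows in code. This is a hedged sketch only; the Workspace class and every method on it are placeholders we invented to mirror Figure 4, not any real tool's API:

```python
class Workspace:
    """Stand-in for an engineer's workspace loaded from a baseline."""
    def __init__(self, dependents):
        self.dependents = dependents          # artifact -> its dependent artifacts

    def apply_change(self, artifact):
        print(f"modifying {artifact}")        # edit or regenerate the artifact

    def verify_and_test(self, artifacts) -> bool:
        return True                           # stand-in for verification + regression

    def promote(self, artifacts):
        print(f"promoting {sorted(artifacts)} to the public configuration space")

def implement_change(workspace, affected):
    frontier, processed = list(affected), set()
    while frontier:                           # propagation may take several iterations
        artifact = frontier.pop()
        if artifact in processed:
            continue
        workspace.apply_change(artifact)
        processed.add(artifact)
        frontier.extend(workspace.dependents.get(artifact, ()))
    if workspace.verify_and_test(processed):  # regression gate before promotion
        workspace.promote(processed)

ws = Workspace({"ip_block": ["core_B"], "core_B": ["timing_report_B"]})
implement_change(ws, ["ip_block"])            # the change request touches ip_block
```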

6.    CONCLUSIONS

This paper explores the role of comprehensive change management in SoC design, development, and delivery.  Based on the comments of over thirty experienced electronic design engineers from across IBM, we have captured the essential problems and motivations for change management in SoC projects. We have described design scenarios, highlighting places where change management applies, and presented a preliminary schema to show the range of data and relationships change management may incorporate.  Change management can benefit both design managers and engineers.  It is increasingly essential for improving productivity and reducing time and cost in SoC projects.

ACKNOWLEDGMENTS

Contributions to this work were also made by Nadav Golbandi and Yoav Rubin of IBM’s Haifa Research Lab.  Much information and guidance were provided by Jeff Staten and Bernd-josef Huettl of IBM’s Systems and Technology Group. We especially thank Richard Bell, John Coiner, Mark Firstenberg, Andrew Mirsky, Gary Nusbaum, and Harry Reindel of IBM’s Systems and Technology Group for sharing design data and experiences.  We are also grateful to the many other people across IBM who contributed their time and expertise.

REFERENCES

1.    IBM Rational ClearCase/ClearQuest, http://www306.ibm.com/software/awdtools/changemgmt/enterprise/index.html

2.    ClioSoft SOS, http://www.cliosoft.com/products/index.html

3.    IC Manage Design Management, http://www.icmanage.com/products/index.html

4.    MatrixOne DesignSync, http://www.ins.clrc.ac.uk/europractice/software/matrixone.html

5.    Unified Modeling Language (UML), http://www.uml.org/

New IBM zEnterprise mainframe server advances smarter computing for companies and governments

IBM today announced a new server -- a powerful, version of the IBM zEnterprise System that's the most scalable mainframe ever – to extend the mainframe's innovation and unique qualities to more organizations, especially companies and governments in emerging markets in Asia, Africa and elsewhere.

The new IBM zEnterprise 114 mainframe server follows the introduction of the zEnterprise System for the world's largest banks, insurance companies and governments in July 2010.  The new server, which allows mid-sized organizations to enjoy the benefits of a mainframe as the foundation for their data centers, costs 25%1 less and offers up to 25%2 percent more performance than its predecessor, the System z10 BC.  Clients can consolidate workloads from 40 Oracle server cores on to a new z114 with just three processors running Linux3. Compared to the Oracle servers the new z114 costs 80% less with similar dramatic savings on floor space and energy3.

At a starting price of under $75,000 -- IBM's lowest ever price for a mainframe server -- the zEnterprise 114  is an especially attractive option for emerging markets experiencing rapid growth in new services for banking, retail, mobile devices, government services and other areas.  These organizations are faced with ever-increasing torrents of data and want smarter computing systems that help them operate efficiently, better understand customer behavior and needs, optimize decisions in real time and reduce risk.

IBM also introduced new features that allow the zEnterprise System to integrate and manage workloads on additional platforms.  New today is support for select System x blades within the zEnterprise System. These System x blades can run Linux x86 applications unchanged, and in the future will be able to run Windows applications.   With these capabilities, the zEnterprise System including the new z114 can help simplify data centers with its ability to manage workloads across mainframe, POWER7 and System x servers as a single system.  Using the zEnterprise Blade Center Extension (zBX), customers can also extend mainframe qualities, such as governance and manageability, to workloads running across multiple platforms.

Smaller firms like PSP -- a provider of credit card processing services -- turned to a mainframe for the first time to consolidate multiple racks of HP servers onto a single IBM Business Class mainframe with just 2 processors. Additional available capacity already built into their entry-level mainframe server is designed to meet their rapid growth projection needs without increasing their IT footprint.

IBM System z servers are also making inroads in emerging markets like Africa. Governments and businesses in Cameroon, Senegal and Namibia have all recently purchased new IBM mainframe servers.

zEnterprise 114

With the z114 clients can start with smaller configurations and access additional capacity built into the server as needed without increasing the data center footprint or systems management complexity and cost.  The new z114 can also consolidate more than 300 HP Proliant servers running Oracle workloads.

The z114 is powered by up to 14 of the industry's most sophisticated microprocessors of which up to 10 can be configured as specialty engines. These specialty engines, the System z Application Assist Processor (zAAP), the System z Integrated Information Processor (zIIP), and the Integrated Facility for Linux (IFL), are designed  to integrate new Java, XML, and Linux applications and technologies with existing workloads, as well to optimize system resources and reduce costs on the mainframe.  For example, using a fully configured machine running Linux for System z, clients can create and maintain a Linux virtual server in the z114 for as little as $500 per year.

The z114 offers up to an 18% improvement on processing traditional z/OS workloads and a 25% improvement on microprocessor-intensive workloads compared to the z10 BC.

The z114 runs all the latest zEnterprise operating systems including the new z/OS V 1.13 announced today.   This new version adds new software deployment and disk management capabilities.  It also offers enhanced autonomics and early error detection features as well as the latest encryption and compliance features extending the mainframe's industry leading security capabilities.

Hybrid Computing

In a move that will further simplify data center management and reduce costs, IBM is also announcing the ability to integrate and manage workloads on select IBM System x servers running Linux as part of the zEnterprise System.

This capability is delivered through the IBM zEnterprise Unified Resource Manager and the IBM zEnterprise BladeCenter Extension (zBX), which allows customers to integrate the management of zEnterprise System resources as a single system and extend mainframe qualities, such as governance and manageability, to workloads running on other select servers.

The zEnterprise System can now integrate and manage workloads running on tens of thousands of off-the-shelf applications on select general-purpose IBM POWER7-based and System x blades, as well as on the IBM Smart Analytics Optimizer, which analyzes data faster at a lower cost per transaction, and the IBM WebSphere DataPower XI50 for integrating web-based workloads.

Up to 112 blades can be integrated and managed as part of the zBX. Different types of blades and optimizers can be mixed and matched within the same BladeCenter chassis.

New Financing Options

IBM Global Financing offers attractive financing options for existing IBM clients looking to upgrade to a z114 as well as clients currently using select HP and Oracle servers.

For current System z clients, IBM Global Financing (IGF) can buy back older systems for cash and upgrade customers to the z114 on a Fair Market Value (FMV) lease, which offers a predictable monthly payment. IGF also will “Sweep the Floor” of existing HP Itanium or Oracle Sun servers with the purchase of a z114: IGF will remove and recycle these older systems in compliance with environmental laws and regulations and pay clients the fair market value of the HP and Oracle-Sun servers. IGF is also offering a 6-month deferral on any hardware, software, services or any combination for clients who wish to upgrade now but pay later.

IGF is also offering 0% financing for 12 months on any IBM software, including IBM middleware for the z114 such as Tivoli, WebSphere, Rational, Lotus and Analytics products.

Killexams : Free On-Demand Webinar: Winning Strategies to Drive Product Innovation & Growth

If the pandemic has taught everyone anything, it is that pivoting business operations in a crisis can be a challenge. In this episode of our Leadership Lessons series, host Jason Nazar, co-founder/CEO of Comparably, speaks with a renowned leader in sales, operations, and technology to discuss winning strategies for small and medium-sized businesses to drive product innovation and growth during this monumental time.

With more than 25 years of experience, former IBM executive Burton Goldfield has transformed TriNet (NYSE: TNET) into a leading cloud-based HR provider and professional employer organization for SMBs. Founded in 1988 in the San Francisco Bay Area, TriNet has more than quadrupled its net revenue during Goldfield's tenure as CEO since 2008. From Main Street to Wall Street, the platform offers full-service HR solutions and access to human capital expertise, benefits, risk mitigation and compliance, payroll and real-time technology. In addition to Goldfield sharing his biggest leadership lessons, other topics include:

  • Importance of employee engagement in a virtual work environment 
  • Critical role of privacy and security in the new work world
  • Why digital transformation must be a key part of your business plan
  • Winning strategies for pivoting business operations in a crisis
  • Innovation: A requirement for business growth and success 


About the Speakers

Since 2008, Burton M. Goldfield has served as president, CEO and board member of TriNet (NYSE: TNET). With more than 25 years of experience in sales, operational, and technology leadership roles, he is known for driving product innovation and business growth. Burton has transformed the company into a leading cloud-based HR provider and professional employer organization. TriNet’s net revenue has more than quadrupled during his tenure. Prior to TriNet, Burton was CEO at Ketera Technologies, a Santa Clara-based SaaS provider to FORTUNE 2000 companies. Before that, Burton served as SVP, Worldwide Field Operations at Hyperion Solutions Corporation and VP of Worldwide Sales for IBM Corporation’s Rational Software division.

Jason Nazar brings 15 years of experience as a serial entrepreneur, investor, and advisor to his role as co-founder/CEO of Comparably, a leading workplace culture and compensation monitoring site. Previously, he was co-founder/CEO of Docstoc (acquired by Intuit in 2013), one of the most visited content sites in the world with the widest selection of professional documents and business resources. Jason was named one of the “Most Admired CEOs in L.A.” by the Los Angeles Business Journal and appointed “Entrepreneur in Residence for the City of Los Angeles” in 2016-2018 by Mayor Eric Garcetti.

Killexams : ALM techniques can help keep your apps in play

For developers and enterprise teams, application life-cycle management in today’s development climate is an exercise in organized chaos.

As movements such as agile, DevOps and Continuous Delivery have created more hybrid roles within a faster, more fluid application delivery cycle, there are new definitions of what each letter in the ALM acronym means. Applications have grown into complex entities with far more moving parts—from modular components to microservices—delivered to a wider range of platforms in a mobile and cloud-based world. The life cycle itself has grown more automated, demanding a higher degree of visibility and control in the tool suites used to manage it all.

Kurt Bittner, principal analyst at Forrester for application development and delivery, said the agile, DevOps and Continuous Delivery movements have morphed ALM into a way to manage a greatly accelerated delivery cycle.

“Most of the momentum we’ve seen in the industry has been around faster delivery cycles and less about application life-cycle management in the sense of managing traceability and requirements end-to-end,” said Bittner. “Those things are important and they haven’t gone away, but people want to do it really fast. When work was done manually, ALM ended up being the core of what everyone did. But as much of the work has become automated—builds, workflows, testing—ALM has become in essence a workflow-management tool. It’s this bookend concept that exists on the front end and then at the end of the delivery pipeline.”

Don McElwee, assistant vice president of professional services for Orasi Software, explained how the faster, more agile delivery process correlates directly to an organization’s bottom line.

“The application life cycle has become a more fluid, cost-effective process where time to market for enhancements and new products is decreased to meet market movements as well as customer expectations,” said McElwee. “It is a natural evolution of previous life cycles where the integration of development and quality assurance align to a common goal. By reducing the amount of functionality to be deployed to a production environment, testing and identifying issues earlier in the application life cycle, the overall cost of building and maintaining applications is decreased while increasing team unity and productivity.”

In addition to the business changes taking place in ALM, the advent of agile, DevOps and Continuous Delivery has also driven a cultural change, according to Kartik Raghavan, executive vice president of worldwide engineering at CollabNet. He said ALM is undergoing a fundamental enterprise shift from a life-cycle functionality focus toward a delivery process colored more by the consumer-focused value of an application.

“All these movements, whether it’s agile or DevOps or Continuous Delivery, try to take the focus away from the individual pieces of delivery to more of the ownership at an application level,” said Raghavan. “It’s pushing ALM toward more of a pragmatic value of the application as a whole. That is the big cultural change.”

ALM for a new slate of platforms
Bittner said ALM tooling has also segmented into different markets for different development platforms. He said development tool chains are different for everything from mobile and cloud to Web applications and embedded software, as developers deploy applications to everything from a mobile app store to a cloud platform such as Amazon’s AWS, Microsoft’s Azure or OpenStack.

“[Tool chains] often fragment along the technology platform lines,” said Bittner. “People developing for the cloud’s main goal is to get things to market quickly, so they tend to have a much more diverse ecosystem of tools, while mobile is so unique because the technology stack is changing all the time and evolving rapidly.”

Hadi Hariri, developer advocacy lead at JetBrains, said the growth of cloud-based applications and services in particular has shifted customer expectations when it comes to ALM.

“Before, having on-site ALM solutions was considered the de facto option,” he said. “Nowadays, more and more customers don’t want to have to deal with hosting, maintenance [or] upgrades of their tools. They want to focus on their own product and delegate these aspects to service and tool providers.”

CollabNet’s Raghavan said this shift toward a wider array of platforms has changed how developers and ALM tool providers think about software. On the surface, he said he sees cloud, mobile, Web and embedded as different channels for delivering applications.

He said there is more focus when developing and managing an application on changing the way a customer expects to consume an application.

“Each of these channels represents another flavor of how they enable customers to consume applications,” said Raghavan. “With the cloud, that means the ability to access the application anywhere. Customers expect to log into an application and quickly understand what it does. Mobile requires you to build an application that leverages the value of the device. You need an ALM suite that recognizes the different tools needed to deliver every application to the cloud, prepare that application for mobile consumption, and even gives you the freedom to think about putting the app on something like a Nest thermostat.”

What’s in an application?
Applications are becoming composites, according to Forrester’s Bittner, and he said ALM must evolve into a means of managing the delivery of these composite applications and the feedback coming from their modular parts integrated with the cloud.

“A mobile application is typically not standalone. It talks to services running in the cloud that talk to other services wrapping legacy systems to provide data,” he said. “So even a mobile application, which sounds like a relatively whole entity, is actually a network of things.”

Matt Brayley-Berger, worldwide product marketing manager of application life cycle and quality for HP, expanded on this concept of application modularity. With a composite application containing sometimes hundreds of interwoven components and services, he said the complexity of building releases has gone up dramatically.

“Organizations are making a positive tradeoff around risk,” he said. “Using all of these smaller pieces, the risk of a single aspect of functionality not working has gone down, but now you’re starting to bring in the risk of the entire system not working. In some ways it’s the ultimate SOA dream realized, but the other side means far more complexity to manage, which is where all these new ALM tools and technologies come in.”

Within that application complexity is also the rise of containers and microservices, which Bittner called the next big growth area in the software development life cycle. He said containers and microservices are turning applications from large pieces of software into a network of orchestrated services with far more moving parts to keep track of.

“Containers and microservices are really applicable to everything,” said Bittner. “They’ll lead to greater modularity for different parts of an application, to deliver organizations the ability to develop different parts of an application independently with the option to replace parts at runtime, or [to] evolve at different speeds. This creates a lot of flexibility around developing and deploying an application, which leads to the notion of an application itself changing.”

JetBrains’ Hariri said microservices are, at their core, just a new way to think about existing SOA architecture, combined with containers to create a new deployment model within applications.

“Microservices, while being sometimes touted as the new thing, are actually very similar, if not the same, as a long-time existing architecture: SOA, except nowadays it would be hard to put the SOA label on something and not be frowned upon,” he said.

“Microservices have probably contributed to making us aware that services should be small and autonomous, so in that sense, maybe the word has provided value. Combining them with containers, which contribute to an autonomous deployment model, it definitely does deliver rise to new potential scenarios that can provide value, as well as introduce new challenges to overcome in increasing the complexity of ALM if not managed appropriately.”

Within a more componentized application, Orasi’s McElwee said it’s even more critical for developers and testers throughout the ALM process to meticulously test each component.

“ALM must now be able to handle agile concepts, where smaller portions of development such as Web services change often and need to deployed rapidly to meet customer demand,” said McElwee. “These smaller application component changes must be validated quickly for both individual functional and larger system impacts. There must be an analysis to determine where failures are likely based on history so that higher-risk areas can be validated quickly. The ability to identify tests and associated data components are critical to the success of these smaller components.”

Managing the modern automated pipeline
For enterprise organizations and development teams to keep a handle on an accelerated delivery process with more complex applications to a wider range of platforms, Bittner believes ALM must provide visibility and control across the entire tool chain.

“There’s a tremendous need for a comprehensive delivery pipeline,” he said. “You have Continuous Integration tools handling a large part of the pipeline handing off to deployment automation tools, and once things get in production you have application analytics tools to gather data. The evolution of this ecosystem demands a single dashboard that lets you know where things are in the process, from the idea phase to the point where it’s in the customer’s hands.”

To achieve that visibility and end-to-end control, some ALM solution providers are relying on APIs. TechExcel’s director of product management Jason Hammon said that when it comes to third-party and open-source automation tools for tasks such as bug tracking, test automation or SCM, those services should be tied with APIs without losing sight of the core goals of ALM.

“At the end of the day, someone is still planning the requirements,” he said. “They’re not automating that process. Someone is still planning the testing and implementing the development. The core pieces of ALM are still there, but we need the ability to extend beyond those manual tasks and pull in automation in each stage.

“That’s the whole point of the APIs and integrations: Teams are using different tools. As the manager I can log in and see how many bugs have been found, even if one team is logging bugs in Bugzilla, another team is logging them in DevTrack, and another team is logging them in JIRA. We can’t say, ‘Here’s this monolithic solution and everyone should use just this.’ People don’t work that way anymore.”
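As one concrete illustration of the cross-tool view Hammon describes, a manager-level dashboard could poll each team's tracker over REST and merge the counts into a single report. The sketch below is a minimal, hypothetical version of the idea: the endpoint URLs, paths and JSON field names are placeholders of our own, not the documented APIs of JIRA, Bugzilla or DevTrack.

```python
# Hypothetical sketch of a cross-tracker bug-count aggregator.
# Endpoints and field names are illustrative placeholders only.
import requests

TRACKERS = {
    # tracker name -> (hypothetical endpoint, JSON field holding the count)
    "JIRA":     ("https://jira.example.com/api/open-bugs",     "total"),
    "Bugzilla": ("https://bugzilla.example.com/api/open-bugs", "count"),
    "DevTrack": ("https://devtrack.example.com/api/open-bugs", "open"),
}

def open_bug_counts(trackers=TRACKERS, timeout=10):
    """Return {tracker: open-bug count}, marking trackers that fail as None."""
    counts = {}
    for name, (url, field) in trackers.items():
        try:
            resp = requests.get(url, timeout=timeout)
            resp.raise_for_status()
            counts[name] = int(resp.json()[field])
        except (requests.RequestException, KeyError, ValueError):
            counts[name] = None  # surface the gap instead of hiding it
    return counts

if __name__ == "__main__":
    for tracker, count in open_bug_counts().items():
        print(f"{tracker}: {count if count is not None else 'unreachable'}")
```

A real integration would swap each placeholder adapter for the vendor's actual API client and authentication, but the aggregation shape stays the same.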

Keeping track of all these automated processes and services running within a delivery pipeline requires constant information. Modern ALM suites are built on communication between teams and managers, as well as streams of real-time notifications through dashboards.

“Anywhere in the process where you have automation, metrics are critical,” said HP’s Brayley-Berger. “Being able to leverage metrics created through automation has become a valuable way to course-correct. We’re moving more toward an opportunity for organizations to use these pieces of data to predict future performance. It almost sounds like a time-travel analogy, but the only way for organizations to go even faster than they already are is to think ahead: What should teams automate? Where are the projects likely to face challenges?”

An end-to-end ALM solution plugged into all this data can also overwhelm teams working within it with excess information, said Paula Rome, senior product manager at Seapine Software.

“We want to make sure developers are getting exactly what they need for their day-to-day job,” said Rome. “Their data feed needs to be filled with notifications that are actually useful. The ALM tool should in no way be preventing them from going to a higher-level view, but we want to be wary of counterproductive interruptions.”

Where ALM goes from here
Rome said it was not so long ago that ALM’s biggest problem was that nobody knew of it. Now, in an environment where more and more applications exist purely in the cloud rather than in traditional on-premise servers, she said ALM provides a feeling of stability.

“Organizations are still storing data somewhere, there are still multiple components, multiple roles and team members that need to be up to date with information so you’re not losing the business vision,” said Rome. “But with DevOps and the pressure of Continuous Delivery, when the guy who wrote the code is the one fixing the bug in production, an ALM tool gives you a sort of DevOps safety net. You need information readily available to you. You can get a sense of the source code and you can start following this trail of clues to what’s going on to make that quick fix.”

As the concepts of what applications and life cycles are have changed, TechExcel’s Hammon said ALM is still about managing the same process.

“You still need to be able to see your project, see its progress and make sure there’s traceability from those requirements through the testing to make sure you’re on track, and that you’ve delivered both what you and the customer expected you to,” said Hammon. “Even if you’re continuously delivering, it’s a way to track what you need to do and what you’ve done. That never changes, and it may never change.”

What developers need in a tool suite for the modern application life cycle

Hadi Hariri
“A successful tool is one that provides value by removing grunt work and errors via automation. Its job is to allow developers to focus on the important tasks, not fight the tool.”

Don McElwee
“Developers should look for a suite of tools that can provide a holistic solution to maximize collaboration with different technologies and other teams such as Quality Assurance, Data Management and Operations. By integrating technologies that offer support to different departments, developers can maximize the talents of those individuals and prove that their code can work and be comfortable with potential real-world situations. No longer will they wonder how it will work, but can tell exactly what it does and why it will work.”

Jason Hammon
“The focus should really be traceability. You can manage requirements, implementation and testing, but developers need to look for something that’s flexible with an understanding that if they should want to change their process later, that they have flexibility to modify their process without being locked into one methodology. You also need flexibility in the tools themselves, and tools that can scale up with the customers and data you have. You need tools that will grow with you.”

Paula Rome
“Developers should do a quick bullet list. What aren’t they happy about in their current process? What are they really trying to fix with this tool? Are things falling through the cracks? Are you having trouble getting the information you need to answer questions right now, not next week? Do you find yourself repeating manual processes over and over? Play product manager for a moment and ask yourself what those high-level goals are; what ALM problems you’re really trying to solve.”

Kartik Raghavan
“[Developers] need to differentiate practitioner tools that help you do a job at a granular level from the tools that deliver you a level of control, governance or visibility into an application. Especially for an enterprise, you have to first optimize tool delivery. Whatever gets you the best output of high-quality software quickly. There are rules and best practices behind that, though. How do you manage your core code? What model have you enabled for it? Do you want a centralized model or a distributed model, and when you roll those things out, you need to set controls. You need to get that right, but with the larger focus of getting rapid delivery automation in place for your Continuous Delivery life cycle.”

Matt Brayley-Berger
“Any tool set needs to be usable. That sounds simple, but oftentimes it’s frustrating when it’s so far from the current process. The tool itself may also have to annotate the existing processes rather than forcing change to connect that data. You need a tool that’s usable for the developer, but with the flexibility to connect to other disciplines and do some of the necessary tracking on the ground level that’s critical in organizations to report things back. Teams shouldn’t have to sacrifice reporting and compliance for something that’s usable.”

A guide to ALM tool suites
Atlassian:
Teams use Atlassian tools to work and collaborate throughout the software development life cycle: JIRA for tracking issues and planning work; Confluence for collaborating on requirements; HipChat for chat; Bitbucket for collaborating on code; Stash for code collaboration and Git repository management; and Bamboo for continuous integration and delivery.

Borland, a Micro Focus company: Borland’s Caliber, StarTeam, AccuRev and Silk product offerings make up a comprehensive ALM suite that provides precision, control and validation across the software development life cycle. Borland’s products are unique in their ability to integrate with each other—and with existing third-party tools—at an asset level.

CollabNet: CollabNet TeamForge ALM is an open ALM platform that helps automate and manage the enterprise application life cycle in a governed, secure and efficient fashion. Leading global enterprises and government agencies rely on TeamForge to extract strategic and financial value from accelerated application development, delivery and DevOps.

HP: HP ALM is an open integration hub for ALM that encompasses requirements, test and development management. With HP ALM, users can leverage existing investments; share and reuse requirements and asset libraries across multiple projects; see the big picture with cross-project reporting and preconfigured business views; gain actionable insights into who is working on what, when, where and why; and define, manage and track requirements through every step of the life cycle.

IBM: IBM’s Rational solution for Collaborative Lifecycle Management is designed to deliver effective ALM to agile, hybrid and traditional teams. It brings together change and configuration management, quality management, requirements management, tracking, and project planning in a common unified platform.

Inflectra: SpiraTeam is an integrated ALM suite that provides everything you need to manage your software projects from inception to release and beyond. With more than 5,000 customers in 100 different countries using SpiraTeam, it’s the most powerful yet easy-to-use tool on the market. It includes features for managing your requirements, testing and development activities all hosted either in our secure cloud environment or available for customers to install on-premise.

JetBrains: JetBrains offers tools for both individual developers as well as teams. TeamCity provides Continuous Integration and Deployment, while YouTrack provides agile project and bug management, which has recently been extended with Upsource, a code review and repository-browsing tool. Alongside its individual developer offerings, which consist of its IDEs for the most popular languages on the market as well as .NET tools, JetBrains covers most of the needs of software development houses, moving toward a fully integrated solution.

Kovair: Kovair provides a complete integrated ALM solution on top of a Web-based central repository. The configurability of Kovair ALM allows users to collaborate with the level of functionality and information they need, using features like a task-based automated workflow engine with visual designer, dashboards, analytics, end-to-end traceability, easy collaboration between all stakeholders, and support for both agile and waterfall methodologies.

Microsoft: Visual Studio Online (VSO), Microsoft’s cloud-hosted ALM service, offers Git repositories; agile planning; build automation for Windows, Linux and Mac; cloud load testing; DevOps features like Continuous Deployment to Windows, Linux and Microsoft Azure; application analytics; and integration with third-party ALM tools. VSO is based on Team Foundation Server, and it integrates with Visual Studio and other popular code editors. VSO is free to the first five users on a team or with MSDN.

Orasi: Orasi is a leading provider of software, support, training, and consulting services using market-leading test-management, test automation, performance intelligence, test data-management and coverage, Continuous Delivery/Integration, and mobile testing technologies. Orasi helps customers reduce the cost and risk of software failures by focusing on a complete software quality life cycle.

Polarion: Polarion ALM is a unifying collaboration and management platform for software and multi-system development projects. Providing end-to-end traceability and transparency from requirements to design to production, Polarion’s flexible architecture and licensing model enables companies to deploy just what they need, where they need it, on-premise or in the cloud.

Rommana: Rommana ALM is a fully integrated set of tools and methodologies that provides full traceability among requirements, scenarios, test cases, issue reports, use cases, timelines, change requests, estimates and resources; one common repository for all project artifacts and documentation; full collaboration between all team members around the globe 24×7; and extensive reporting capabilities.

Seapine: Seapine Software’s integrated ALM suite enables product development and IT organizations to ensure the consistent release of high-quality products, while providing traceability, reporting and compliance. Featuring TestTrack for requirements, issue, and test management; Surround SCM for configuration management; and QA Wizard Pro for automated functional testing and load testing, Seapine’s tools provide a single source of truth for project development artifacts, statuses and quality to reduce risks inherent in complex product development.

Serena Software: Serena provides secure, collaborative and process-based ALM solutions. Dimensions RM improves the definition, management and reuse of requirements, increasing visibility and collaboration across stakeholders; Dimensions CM simplifies collaborative parallel development, improving team velocity and assuring release readiness; and Deployment Automation enables deployment pipeline automation, reducing cycle time and supporting rapid delivery.

Sparx Systems: Sparx Systems’ flagship product, Enterprise Architect provides full life-cycle modeling for real-time and embedded development, software and systems engineering, and business and IT systems. Based on UML and related specifications, Enterprise Architect is a comprehensive team-based modeling environment that helps organizations analyze, design and construct reliable, well-understood systems.

TechExcel: TechExcel DevSuite is specifically designed to manage both agile and traditional projects, as well as streamline requirements, development and QA processes. The fully definable user interface allows complete workflow and UI customization based on project complexity and the needs of cross-functional teams. DevSuite also features built-in multi-site support for distributed teams, two-way integration with MS Word, and third-party integrations using RESTful APIs. DevSuite’s dynamic, real-time reporting and analytics also enable faster issue detection and resolution.

Killexams : Monarch Casino: Best Gaming Stock Bet, Say Portfolio Wealth Builders

The primary focus of this article is Monarch Casino & Resort, Inc. (NASDAQ:MCRI)

Investment Thesis

21st-century paces of change in technology and in rational (not emotional) behavior seriously disrupt the commonly accepted productive investment strategy of the 20th century.

One required change is the shortening of forecast horizons, with a shift from the multi-year passive approach of buy and hold to the active strategy of specific price-change target achievement or time-limit actions, with reinvestment set to new nearer-term targets.

That change avoids the irretrievable loss of invested time spent destructively by failure to recognize shifting evolutions like the cases of IBM, Kodak, GM, Xerox, General Electric, and many others.

It recognizes the progress in medical, communication and information technologies and enjoys their operational benefits already present in extended lifetimes, trade-commission-free investments, and coming benefits in transportation utilization and energy usage.

But it requires the ability to make valid direct comparisons of value between investment reward prospects and risk exposures in the uncertain future. Since uncertainty expands as the future dimension increases, shorter forecast horizons are a means of improving the reward-to-risk comparison.

That shortening is now best attended at the investment entry point by knowing Market-Maker ("MM") expectations for coming prices. When reached, their updates are then reintroduced at the exit/reinvestment point and the term of expectations for the required coming comparisons are recognized as the decision entry point to move forward.

The MM's constant presence, extensive global communications and human resources dedicated to monitoring industry-focused competitive evolution sharpens MM price expectations, essential to their risk-avoidance roles.

Their roles require firm capital be only temporarily risk-exposed, so are hedged by derivative-securities deals to avoid undesired price changes. The deals' prices and contracts provide a window to MM price expectations.

Information technology via the internet makes investment monitoring and management time and attention efficient despite its increase in frequency.

Once an investment choice is made and buy transaction confirmation is received, a target-price GTC sell order for the confirmed number of shares at the target price or better should be placed. Keeping trade actions entered through the internet on your lap/desk-top or cell phone should avoid trade commission charges. Your broker's internal system should keep you informed of your account's progress.

Your own private calendar record should be kept of the date 63 market days (or 91 calendar days) beyond the trade's confirmation date as a time-limit alert to check if the GTC order has not been executed. If not, then start your exit and reinvestment decision process.

The 3-months' time limit is what we find to be a good choice, but may be extended some if desired. Beyond 5-6 months' time investments start to work against the process and are not recommended.

For investments guided by this article or others by me target prices will always be found as the high price in the MM forecast range.

Description of Equity Subject Company

"Monarch Casino & Resort, Inc., through its subsidiaries, owns and operates the Atlantis Casino Resort Spa, a hotel and casino in Reno, Nevada. The company also owns and operates the Monarch Casino Resort Spa Black Hawk in Black Hawk, Colorado. As of December 31, 2021, its Atlantis Casino Resort Spa featured approximately 61,000 square feet of casino space; 818 guest rooms and suites; 8 food outlets; 2 gourmet coffee and pastry bars; a 30,000 square-foot health spa and salon with an enclosed pool; approximately 52,000 square feet of banquet, convention, and meeting room space. The company's Atlantis Casino Resort Spa also featured approximately 1,400 slot and video poker machines; approximately 37 table games, including blackjack, craps, roulette, and others; a race and sports book; a 24-hour live keno lounge; and a poker room. In addition, its Monarch Casino Resort Spa Black Hawk featured approximately 60,000 square feet of casino space; approximately 1,100 slot machines; approximately 40 table games; 10 bars and lounges; 4 dining options; 516 guest rooms and suites. The company was founded in 1972 and is based in Reno, Nevada."

Source: Yahoo Finance

Estimates by Street Analysts

Yahoo Finance

These growth estimates have been made by and are collected from Wall Street analysts to suggest what conventional methodology currently produces. The typical variations across forecast horizons of different time periods illustrate the difficulty of making value comparisons when the forecast horizon is not clearly defined.

Risk and Reward Balances Among MCRI Competitors

Figure 1

MM hedging forecasts

blockdesk.com

Used with permission.

The risk dimension is of real price draw-downs at their most extreme point while being held in previous pursuit of upside rewards similar to the ones currently being seen. They are measured on the red vertical scale. Reward expectations are measured on the green horizontal scale.

Both scales are of percent change from zero to 25%. Any stock or ETF whose present risk exposure exceeds its reward prospect will be above the dotted diagonal line. Capital-gain-attractive to-buy issues are in the directions down and to the right.

Our principal interest is in MCRI at location [11], at the lower right-hand edge of the competitor crowd. A "market index" norm of reward~risk trade-offs is offered by SPY at [7]. Most appealing by this Figure 1 view for wealth-building investors is MCRI.

Comparing competitive features of Casino Gaming Providers

The Figure 1 map provides a good visual comparison of the two most important aspects of every equity investment in the short term. There are other aspects of comparison which this map sometimes does not communicate well, particularly when general market perspectives like those of SPY are involved. Where questions of "how likely" are present, other comparative tables, like Figure 2, may be useful.

Yellow highlighting of the table's cells emphasizes factors important to securities valuations and the security most promising of near capital gain, MCRI, as ranked in column [R].

Figure 2

detail comparative data

blockdesk.com

Used with permission.

Why do all this math?

Figure 2's purpose is to attempt universally comparable answers, stock by stock, of: a) How BIG the prospective price gain payoff may be; b) how LIKELY the payoff will be a profitable experience; c) how SOON it may happen; and d) what price drawdown RISK may be encountered during its active holding period.

Readers familiar with our analysis methods after quick examination of Figure 2 may wish to skip to the next section viewing price range forecast trends for MCRI.

Column headers for Figure 2 define investment-choice preference elements for each row stock whose symbol appears at the left in column [A]. The elements are derived or calculated separately for each stock, based on the specifics of its situation and current-day MM price-range forecasts. Data in red numerals are negative, usually undesirable to "long" holding positions. Table cells with yellow fills are of data for the stocks of principal interest and of all issues at the ranking column, [R].

The price-range forecast limits of columns [B] and [C] get defined by MM hedging actions to protect firm capital required to be put at risk of price changes from volume trade orders placed by big-$ "institutional" clients.

[E] measures potential upside risks for MM short positions created to fill such orders, and reward potentials for the buy-side positions so created. Prior forecasts like the present provide a history of relevant price draw-down risks for buyers. The most severe ones actually encountered are in [F], during holding periods in effort to reach [E] gains. Those are where buyers are emotionally most likely to accept losses.

The Range Index [G] tells where today's price lies relative to the MM community's forecast of upper and lower limits of coming prices. Its numeric is the percentage proportion of the full low to high forecast seen below the current market price.
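Taken literally, that definition is simple arithmetic: the Range Index is the share of the low-to-high forecast span that sits below today's price. The sketch below is one plausible studying of it; the function name and demo numbers are ours, not blockdesk.com's.

```python
def range_index(price, low, high):
    """Percent of the low-to-high forecast range lying below the current price."""
    return 100.0 * (price - low) / (high - low)

# Hypothetical illustration: a $60-$80 forecast range with the stock at $62.40
# gives range_index(62.40, 60.0, 80.0) == 12.0, i.e. an RI of 12 -- price near
# the bottom of the range, with most of the forecast span to the upside.
```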

[H] tells what proportion of the [L] sample of prior like-balance forecasts have earned gains by either having price reach its [B] target or be above its [D] entry cost at the end of a 3-month max-patience holding period limit. [ I ] gives the net gains-losses of those [L] experiences.

What makes MCRI most attractive in the group at this point in time is its ability to produce capital gains most consistently at its present operating balance between share price risk and reward at the Range Index [G]. At an RI of 12, today's price is near the bottom of its forecast range, with price expectations to the upside seven times those to the downside. These are not our expectations, but those of Market-Makers acting in support of institutional investment organizations building the values of their typical multi-billion-$ portfolios. Credibility of the [E] upside prospect, as evidenced in the [I] payoff at +18%, is shown in [N].

Further Reward~Risk trade-offs involve using the [H] odds for gains with the 100 - H loss odds as weights for N-conditioned [E] and for [F], for a combined-return score [Q]. The typical position holding period [J] on [Q] provides a figure of merit [fom] ranking measure [R] useful in portfolio position preferences. Figure 2 is row-ranked on [R] among alternative candidate securities, with MCRI in top rank.
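The article does not spell out the exact weighting arithmetic behind [Q], but a literal studying of the sentence above is an odds-weighted average: the win odds weight the risk-conditioned upside, and the complementary loss odds weight the drawdown. Here is a minimal sketch under that assumption, with all numbers hypothetical.

```python
def combined_return_score(win_odds_pct, upside_pct, drawdown_pct):
    """Blend upside and drawdown using win odds H and loss odds (100 - H)."""
    h = win_odds_pct / 100.0
    return h * upside_pct + (1.0 - h) * drawdown_pct

# e.g. 93% win odds, +18% odds-conditioned upside [E], -9% worst drawdown [F]:
# combined_return_score(93, 18.0, -9.0) -> about +16.1
```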

Along with the candidate-specific stocks these selection considerations are provided for the averages of some 3,000 stocks for which MM price-range forecasts are available today, and 20 of the best-ranked (by fom) of those forecasts, as well as the forecast for S&P500 Index ETF (SPY) as an equity-market proxy.

Current-market index SPY is only moderately competitive as an investment alternative. Its Range Index of 42 indicates that somewhat more than half of its forecast range is to the upside, while three-quarters of previous SPY forecasts at this range index produced profitable outcomes.

As shown in column [T] of Figure 2, those levels vary significantly between stocks. What matters is the net gain between investment gains and losses actually achieved following the forecasts, shown in column [I]. The Win Odds of [H] tell what proportion of the sample RIs of each stock were profitable. Odds below 80% often have proven to lack reliability.

Recent Forecast Trends of the Primary Subject

Figure 3

daily forecast trends

blockdesk.com

Used with permission.

Many investors confuse any time-repeating picture of stock prices with typical "technical analysis charts" of past stock price history. These are quite different in their content. Instead, here Figure 3's vertical lines are a daily-updated visual record of price range forecast limits expected in the coming few weeks and months. The heavy dot in each vertical is the stock's closing price on the day the forecast was made.

That market price point makes an explicit definition of the price reward and risk exposure expectations which were held by market participants at the time, with a visual display of their vertical balance between risk and reward.

The measure of that balance is the Range Index (RI).

With today's RI there is 14.8% upside price change in prospect. Of the prior 27 forecasts like today's RI, 25 have been profitable. The market's actions on prior forecasts became accomplishments of +15% gains in 30 market days, or 6 weeks. So history's advantage could be repeated eight times or more in a 252-market-day year, which compounds into a CAGR of +232%.
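The compounding claim can be checked with one line of arithmetic: a +15% gain repeated every 30 market days compounds 252/30 = 8.4 times a year. The small sketch below is our own, using the article's figures as inputs.

```python
def cagr_from_repeats(gain_pct, holding_days, year_days=252):
    """Compound one holding-period gain over back-to-back repeats for a year."""
    repeats = year_days / holding_days          # 252 / 30 = 8.4 cycles
    return ((1.0 + gain_pct / 100.0) ** repeats - 1.0) * 100.0

# cagr_from_repeats(15, 30) -> roughly +224%, in the neighborhood of the
# article's +232% figure, whose exact rounding we cannot reproduce here.
```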

Also please note the smaller low picture in Figure 3. It shows the past 5-year distribution of Range Indexes, with the current level visually marked. For MCRI, nearly all past forecasts have been at higher prices and Range Indexes.

Conclusion

Based on direct comparisons of MCRI with other casino gaming operators, there are strong wealth-building reasons to prefer a capital-gain-seeking buy in Monarch Casino & Resort, Inc. over other examined alternatives.

Killexams : PPG Industries Stock Bottom-Priced By Portfolio Wealth Builders

Investment Thesis

21st-century paces of change in technology and in rational (not emotional) behavior seriously disrupt the commonly accepted productive investment strategy of the 20th century.

One required change is the shortening of forecast horizons, with a shift from the multi-year passive approach of buy and hold to the active strategy of specific price-change target achievement or time-limit actions, with reinvestment set to new nearer-term targets.

That change avoids the irretrievable loss of invested time spent destructively by failure to recognize shifting evolution like the cases of IBM, Kodak, GM, Xerox, GE and many others.

It recognizes the progress in medical, communication and information technologies and enjoys their operational benefits already present in extended lifetimes, trade-commission-free investments, and coming benefits in transportation utilization and energy usage.

But it requires the ability to make valid direct comparisons of value between investment reward prospects and risk exposures in the uncertain future. Since uncertainty expands as the future dimension increases, shorter forecast horizons are a means of improving the reward-to-risk comparison.

That shortening is now best attended at the investment entry point by knowing Market-Maker expectations for coming prices. When reached, their updates are then reintroduced at the exit/reinvestment point and the term of expectations for the required coming comparisons are recognized as the decision entry point to move forward.

The MM's constant presence, extensive global communications and human resources dedicated to monitoring industry-focused competitive evolution sharpens MM price expectations, essential to their risk-avoidance roles.

Their roles require firm capital be only temporarily risk-exposed, so are hedged by derivative-securities deals to avoid undesired price changes. The deals' prices and contracts provide a window to MM price expectations.

Information technology via the internet makes investment monitoring and management time and attention efficient despite its increase in frequency.

Once an investment choice is made and buy transaction confirmation is received, a target-price GTC sell order for the confirmed number of shares at the target price or better should be placed. Keeping trade actions entered through the internet on your lap/desk-top or cell phone should avoid trade commission charges. Your broker's internal system should keep you informed of your account's progress.

Your own private calendar record should be kept of the date 63 market days (or 91 calendar days) beyond the trade's confirmation date as a time-limit alert to check if the GTC order has not been executed. If not, then start your exit and reinvestment decision process.

The 3-months' time limit is what we find to be a good choice, but may be extended some if desired. Beyond 5-6 months' time, investments start to work against the process and are not recommended.

For investments guided by this article or others by me target prices will always be found as the high price in the MM forecast range.

Description of Equity Subject Company

"PPG Industries, Inc. manufactures and distributes paints, coatings, and specialty materials worldwide. The company's Performance Coatings segment offers coatings, solvents, adhesives, sealants, sundries, and software for automotive and commercial transport/fleet repair and refurbishing, light industrial coatings, and specialty coatings for signs; and coatings, sealants, transparencies, transparent armor, adhesives, engineered materials, and packaging and chemical management services for commercial, military, regional jet, and general aviation aircraft. The company was incorporated in 1883 and is headquartered in Pittsburgh, Pennsylvania.."

Source: Yahoo Finance

PPG Street analyst estimates

Yahoo Finance

These growth estimates have been made by and are collected from Wall Street analysts to suggest what conventional methodology currently produces. The typical variations across forecast horizons of different time periods illustrate the difficulty of making value comparisons when the forecast horizon is not clearly defined.

Risk and Reward Balances Among NYSE:PPG Competitors

Figure 1

PPG stock hedging forecasts

blockdesk.com

The risk dimension is of real price draw-downs at their most extreme point while being held in previous pursuit of upside rewards similar to the ones currently being seen. They are measured on the red vertical scale. Reward expectations are measured on the green horizontal scale.

Both scales are of percent change from zero to 25%. Any stock or ETF whose present risk exposure exceeds its reward prospect will be above the dotted diagonal line. Capital-gain-attractive to-buy issues are in the directions down and to the right.

Our principal interest is in PPG at location [2], at the right-hand edge of the competitor crowd. A "market index" norm of reward~risk tradeoffs is offered by SPY at [1]. Most appealing by this Figure 1 view for wealth-building investors is PPG.

Comparing competitive features of Specialty Paint Providers

The Figure 1 map provides a good visual comparison of the two most important aspects of every equity investment in the short term. There are other aspects of comparison which this map sometimes does not communicate well, particularly when general market perspectives like those of SPY are involved. Where questions of "how likely" are present, other comparative tables, like Figure 2, may be useful.

Yellow highlighting of the table's cells emphasizes factors important to securities valuations and the security most promising of near capital gain, PPG, as ranked in column [R].

Figure 2

PPG vs peers detailed comparative data

blockdesk.com

(used with permission)

Why do all this math?

Figure 2's purpose is to attempt universally comparable answers, stock by stock, of a) How BIG the prospective price gain payoff may be, b) how LIKELY the payoff will be a profitable experience, c) how SOON it may happen, and d) what price draw-down RISK may be encountered during its active holding period.

Readers familiar with our analysis methods after quick examination of Figure 2 may wish to skip to the next section viewing price range forecast trends for PPG.

Column headers for Figure 2 define investment-choice preference elements for each row stock whose symbol appears at the left in column [A]. The elements are derived or calculated separately for each stock, based on the specifics of its situation and current-day MM price-range forecasts. Data in red numerals are negative, usually undesirable to "long" holding positions. Table cells with yellow fills are of data for the stocks of principal interest and of all issues at the ranking column, [R].

The price-range forecast limits of columns [B] and [C] get defined by MM hedging actions to protect firm capital required to be put at risk of price changes from volume trade orders placed by big-$ "institutional" clients.

[E] measures potential upside risks for MM short positions created to fill such orders, and reward potentials for the buy-side positions so created. Prior forecasts like the present provide a history of relevant price draw-down risks for buyers. The most severe ones actually encountered are in [F], during holding periods in effort to reach [E] gains. Those are where buyers are emotionally most likely to accept losses.

The Range Index [G] tells where today's price lies relative to the MM community's forecast of upper and lower limits of coming prices. Its numeric is the percentage proportion of the full low to high forecast seen below the current market price.

[H] tells what proportion of the [L] sample of prior like-balance forecasts have earned gains by either having price reach its [B] target or be above its [D] entry cost at the end of a 3-month max-patience holding period limit. [ I ] gives the net gains-losses of those [L] experiences.

What makes PPG most attractive in the group at this point in time is its ability to produce capital gains most consistently at its present operating balance between share price risk and reward at the Range Index [G]. At an RI of 1, today's price is at the bottom of its forecast range, with all price expectations to the upside. These are not our expectations, but those of Market-Makers acting in transaction support of institutional investment organizations building the values of their typical multi-billion-$ portfolios. Credibility of the [E] upside prospect, as evidenced in the [I] payoff at +18%, is shown in [N].

Further Reward~Risk tradeoffs involve using the [H] odds for gains with the 100 - H loss odds as weights for N-conditioned [E] and for [F], for a combined-return score [Q]. The typical position holding period [J] on [Q] provides a figure of merit [fom] ranking measure [R] useful in portfolio position preferences. Figure 2 is row-ranked on [R] among alternative candidate securities, with PPG in top rank.

Along with the candidate-specific stocks these selection considerations are provided for the averages of some 3,000 stocks for which MM price-range forecasts are available today, and 20 of the best-ranked (by fom) of those forecasts, as well as the forecast for S&P500 Index ETF (SPY) as an equity-market proxy.

Current-market index SPY is not competitive as an investment alternative. Its Range Index of 26 indicates three-quarters of its forecast range is to the upside, but little more than half of previous SPY forecasts at this range index produced profitable outcomes.

As shown in column [T] of Figure 2, those levels vary significantly between stocks. What matters is the net gain between investment gains and losses actually achieved following the forecasts, shown in column [I]. The Win Odds of [H] tell what proportion of the sample RIs of each stock were profitable. Odds below 80% often have proven to lack reliability.

Recent Forecast Trends of the Primary Subject

Figure 3

PPG daily hedging forecasts trend

blockdesk.com

(used with permission)

Many investors confuse any time-repeating picture of stock prices with typical "technical analysis charts" of past stock price history. These are quite different in their content. Instead, here Figure 3's vertical lines are a daily-updated visual record of price range forecast limits expected in the coming few weeks and months. The heavy dot in each vertical is the stock's closing price on the day the forecast was made.

That market price point makes an explicit definition of the price reward and risk exposure expectations which were held by market participants at the time, with a visual display of their vertical balance between risk and reward.

The measure of that balance is the Range Index (RI).

With today's RI there is 18% upside price change in prospect. Of the prior 43 forecasts like today's RI, 40 have been profitable. The market's actions on prior forecasts became accomplishments of +11% gains in 47 market days. So history's advantage could be repeated five times or more in a 252-market-day year, which compounds into a CAGR of +72%.

Also please note the smaller low picture in Figure 3. It shows the past 5-year distribution of Range Indexes, with the current level visually marked. For PPG, nearly all past forecasts have been at higher prices and Range Indexes.

Conclusion

Based on direct comparisons with SHW and other paint producers, there are strong wealth-building reasons to prefer a capital-gain-seeking buy in PPG Industries, Inc. (PPG) over other examined alternatives.

Killexams : Economics A-Z terms beginning with A

Antitrust

government policy for dealing with monopoly. Antitrust laws aim to stop abuses of market power by big companies and, sometimes, to prevent corporate mergers and acquisitions that would create or strengthen a monopolist. There have been big differences in antitrust policies both among countries and within the same country over time. This has reflected different ideas about what constitutes a monopoly and, where there is one, what sorts of behaviour are abusive.

In the United States, monopoly policy has been built on the Sherman Antitrust Act of 1890. This prohibited contracts or conspiracies to restrain trade or, in the words of a later act, to monopolise commerce. In the early 20th century this law was used to reduce the economic power wielded by so-called "robber barons", such as JP Morgan and John D. Rockefeller, who dominated much of American industry through huge trusts that controlled companies' voting shares. Du Pont chemicals, the railroad companies and Rockefeller's Standard Oil, among others, were broken up. In the 1970s the Sherman Act was turned (ultimately without success) against IBM, and in 1982 it secured the break-up of AT&T's nationwide telecoms monopoly.

In the 1980s a more laissez-faire approach was adopted, underpinned by economic theories from the chicago school. These theories said that the only justification for antitrust intervention should be that a lack of competition harmed consumers, and not that a firm had become, in some ill-defined sense, too big. Some monopolistic activities previously targeted by antitrust authorities, such as predatory pricing and exclusive marketing agreements, were much less harmful to consumers than had been thought in the past. They also criticised the traditional method of identifying a monopoly, which was based on looking at what percentage of a market was served by the biggest firm or firms, using a measure known as the herfindahl-hirschman index. Instead, they argued that even a market dominated by one firm need not be a matter of antitrust concern, provided it was a contestable market.
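For readers unfamiliar with it, the Herfindahl-Hirschman index is simply the sum of the squared market shares of every firm in the market, so it rises sharply as share concentrates in fewer hands. A quick illustrative computation follows; the market shares are hypothetical.

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman index: sum of squared market shares, in percent."""
    return sum(s * s for s in shares_pct)

# A market split 40/30/20/10 across four firms scores 3,000; a pure monopoly
# scores 100**2 = 10,000, while a market of many tiny firms scores near zero.
print(hhi([40, 30, 20, 10]))  # -> 3000
```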

In the 1990s American antitrust policy became somewhat more interventionist. A high-profile lawsuit was launched against Microsoft in 1998. The giant software company was found guilty of anti-competitive behaviour, which was said to slow the pace of innovation. However, fears that the firm would be broken up, signalling a far more interventionalist American antitrust policy, proved misplaced. The firm was not severely punished.

In the UK, antitrust policy was long judged according to what policymakers decided was in the public interest. At times this approach was comparatively permissive of mergers and acquisitions; at others it was less so. However, in the mid-1980s the UK followed the American lead in basing antitrust policy on whether changes in competition harmed consumers. Within the rest of the european union several big countries pursued policies of building up national champions, allowing chosen firms to enjoy some monopoly power at home which could be used to make them more effective competitors abroad. However, during the 1990s the European Commission became increasingly active in antitrust policy, mostly seeking to promote competition within the EU.

In 2000, the EU controversially blocked a merger between two American firms, GE and Honeywell; the deal had already been approved by America's antitrust regulators. The controversy highlighted an important issue. As globalisation increases, the relevant market for judging whether market power exists or is being abused will increasingly cover far more territory than any one single economy. Indeed, there may be a need to establish a global antitrust watchdog, perhaps under the auspices of the world trade organisation.
