Best Computer Hardware Certifications 2019

Becoming a computer technician is a great point of entry into the IT field. In addition, computer hardware certifications can help demonstrate your knowledge and competency in maintaining computers, mobile devices, printers and more. Below, you’ll find our pick of six computer hardware certifications to help you get your IT career off the ground.

Although we cover our favorite hardware certifications here, the idea that hardware can operate independently of software (or vice versa) isn’t true. If you dig into the curriculum for any specific hardware-related certs in any depth, you’ll quickly realize that software is in control of hardware.

Software comes into play for installation, configuration, maintenance, troubleshooting and just about any other activity you can undertake with hardware. The hardware label simply indicates that devices are involved, not that hardware is all that’s involved.

Job board search results (in alphabetical order, by certification)

Certification SimplyHired Indeed LinkedIn Jobs Total
A+ (CompTIA) 1,566 2,396 2,282 2,187 8,431
ACMT (Apple) 134 258 196 44 632
BICSI Technician (BICSI) 384 657 30 92 1,163
CCT (Cisco) 473 826 601 722 2,622
RCDD (BICSI) 276 378 377 104 1,135
Server+ (CompTIA) 2,318 3,064 1,250 1,069 7,701
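As a quick arithmetic check on the table above, the Total column in each row is simply the sum of the four job-board result counts. A short Python snippet (ours, for illustration) confirms the sums:

```python
# Job-board search results from the table above.
# Each value is ([four per-board counts], reported total).
results = {
    "A+ (CompTIA)":      ([1566, 2396, 2282, 2187], 8431),
    "ACMT (Apple)":      ([134, 258, 196, 44], 632),
    "BICSI Technician":  ([384, 657, 30, 92], 1163),
    "CCT (Cisco)":       ([473, 826, 601, 722], 2622),
    "RCDD (BICSI)":      ([276, 378, 377, 104], 1135),
    "Server+ (CompTIA)": ([2318, 3064, 1250, 1069], 7701),
}

for cert, (counts, total) in results.items():
    # The reported total should match the sum of the per-board counts.
    assert sum(counts) == total, cert
    print(f"{cert}: {total}")
```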

Differing factors, such as specific job role, locality and experience level, may affect salary potential. In general, hardware professionals can expect to earn somewhere in the mid-$60,000s. SimplyHired reports average earnings of $71,946 for IT technicians, with highs approaching $116,000. The average national salary for computer hardware technicians ranges from about $31,000 to more than $53,000. However, some certifications command higher salaries: Certification Magazine's Salary Survey 2018 reports average salaries of $98,060 for CompTIA Server+ holders and $97,730 for the A+ credential.

CompTIA A+

The CompTIA A+ certification is the granddaddy and best known of all hardware credentials. For anyone serious about working with PCs, laptops, mobile devices, printers or operating systems, the A+ should at least be on their radar, if not in their game plan.

Since the first A+ credential was awarded in March 1993, the program continues to draw active interest and participation. With more than 1 million IT professionals now possessing the A+ credential, it is something of a checkbox item for PC technicians and support professionals. It also appears in a great many job postings or advertisements.

A+ is also ISO 17024 compliant and accredited by ANSI. Thus, this credential must be renewed every three years in keeping with concomitant requirements for continuing education or regular examinations to maintain certification currency. Some 20 continuing education units (CEUs) are required for renewal.

Earning an A+ from CompTIA involves passing two exams: 220-901 and 220-902. Exam 220-901 focuses on hardware, networking, mobile devices, connectivity and troubleshooting. Exam 220-902 draws on knowledge of installing and configuring common operating systems (Windows, Linux, OS X, Android and iOS). It also covers issues related to cloud computing, security and operational procedures. Candidates will find a variety of question formats, including standard multiple-choice, drag-and-drop and performance-based questions on these exams.

Candidates who earn the A+ often find themselves in job roles that include technical support specialist, field service technician, IT support technician, IT support administrator or IT support specialist. The A+ is recognized by the U.S. Department of Defense (in DoD Directive 8140/8570.01-M). Also, technology companies, such as Ricoh, Nissan, Blue Cross Blue Shield, Dell, HP and Intel, require staff to earn the A+ certification to fill certain positions.

The A+ certification encompasses broad coverage of PC hardware and software, networking and security in its overall technical scope.

A+ Facts and Figures

Certification name  CompTIA A+
Prerequisites & required courses 9-12 months of experience recommended
Number of exams  Two exams (maximum of 90 questions each, 90 minutes each): 220-901 and 220-902 (CompTIA Academy Partners use the same numbers)
Cost per exam  $211 per exam. Exams administered by Pearson VUE. Exam vouchers available at CompTIA
Self-study materials

CompTIA offers several self-study materials, including exam objectives, sample questions and study guides ($178 for the eBook, $198 for the print edition), as well as classroom and e-learning training opportunities. Credential seekers may also want to check out the CertMaster online learning tool. Links to CompTIA training materials may be found on the certification webpage.

Recommended books:

CompTIA A+ 220-901 and 220-902 Exam Cram, 1st Edition, by David L. Prowse, published Jan. 30, 2016, Pearson IT Certifications, Exam Cram Series, ISBN-10: 0789756315, ISBN-13: 978-0789756312

CompTIA A+ Certification All-in-One Exam Guide, 9th Edition (Exams 220-901 and 220-902) by Mike Meyers, published Jan. 4, 2016, McGraw-Hill Education, ISBN-10: 125958951X, ISBN-13: 978-1259589515

ACMT: Apple Certified Macintosh Technician

Given the popularity of Apple products and platforms, and widespread use of Macintosh computers in homes and businesses of all sizes, there’s demand galore for Mac-savvy technicians.

The AppleCare Mac Technician (ACMT) 2018 credential is Apple’s latest hardware-related certification. (The credential was formerly called the Apple Certified Macintosh Technician or Apple Certified Mac Technician.) Per Apple, the ACMT 2018 “qualifies a technician to repair all the Mac products that were covered by prior ACMT certifications, plus all other Mac products that were produced before April 2018.” Technicians with the ACMT certification who work at an Apple-authorized service facility are allowed to perform service and repairs.

The ACMT’s two required exams are the Apple Service Fundamentals and the ACMT 2018 Mac Service Certification. Service Fundamentals focuses on customer experience skills, ESD and safety, troubleshooting and deductive reasoning, and product knowledge. The Mac Service exam covers troubleshooting and repair of Mac hardware (mainly Apple iMac and MacBook Pro systems). Note that the Apple Service Fundamentals exam is also required for the Apple Certified iOS Technician (ACiT) 2018 certification.

The ACMT 2018 is a permanent credential and does not require annual recertification. However, as new products are added to the Apple portfolio, AppleCare will make associated courses available through Apple Technical Learning Administration System (ATLAS). You must complete these courses to service new products.

ACMT Facts and Figures

Certification name AppleCare Mac Technician (ACMT) 2018
Prerequisites & required courses AppleCare Technician Training recommended
Number of exams Two exams (must be taken in this order):

Apple Service Fundamentals exam (SVC-17A) OR Apple Service Fundamentals exam (SVC-18A)

ACMT 2018 Mac Service Certification exam (MAC-18A)

Each exam: 70 questions, 2 hours, 80 percent passing score

Exams administered by Pearson VUE; Apple Tech ID number required

Cost per exam TBD
Self-study materials Self-paced training: Apple Technical Learning Administration System (ATLAS)

AppleCare Technician Training, $299

Instructor-led training courses: LearnQuest

BICSI Technician and Registered Communications Distribution Designer

BICSI is a professional association that supports the information and communications technology (ICT) industry, mainly in the areas of voice, data, audio and video, electronic safety and security, and project management. BICSI offers training, certification and education to its 23,000-plus members, many of whom are designers, installers and technicians.

BICSI offers several certifications aimed at ICT professionals, who mainly deal with cabling and related technologies. Two credentials, the BICSI Technician and the BICSI Registered Communications Distribution Designer (RCDD), are pertinent (and popular) picks for this story.

The BICSI Technician recognizes individuals who lead an installation group or team, perform advanced testing and troubleshooting of cable installations, evaluate cabling requirements, recommend solutions based on standards and best practices, and roll out new and retrofit projects. Technicians must be well versed in both copper and fiber cabling.

Candidates need a good deal of knowledge about the hardware, networking devices and communications equipment to which they connect cables.

To earn the credential, candidates must pass a single two-part exam consisting of a hands-on practical evaluation and a written exam. In addition, candidates must possess at least three years of verifiable ICT industry installation experience gained within the past five years. Credentials are valid for three years. Certification holders must earn 18 hours of continuing education credits (CECs) in each three-year credentialing cycle and pay the current renewal fees to maintain this credential.

Interested candidates should also check out other BICSI certifications, such as the Installer 1 (INST1), Installer 2 Copper (INSTC) and Installer 2 Optical Fiber (INSTF).

An advanced credential, the Registered Communications Distribution Designer (RCDD) is so well respected that the Department of Defense Unified Facilities requires RCDD for all telecom-related design projects. The RCDD is geared toward experienced ICT practitioners with at least five years of ICT design experience. Alternatively, candidates who do not have the requisite experience but who possess at least two years of design experience plus three years of knowledge “equivalents” (a combination of approved education, certifications or licenses) may also sit for the exam. All experience must have been within the preceding 10 years.

RCDD candidates should be able to create and prepare system design specifications and plans, as well as recommended best practices for security design requirements, for business automation systems. RCDDs are also well versed in data center design, cabling systems, and design for wireless, network and electronic security systems.

To earn the credential, candidates must meet the experience requirements and submit the application, credentialing fees and a current resume. In addition, candidates must submit four letters of reference, two of which must be from current or former clients. One reference may be personal, while the remaining reference must come from the candidate’s employer.

Other advanced BICSI certifications include the Outside Plant (OSP) Designer, Data Center Design Consultant (DCDC) and Registered Telecommunication Project Manager (RTPM).

BICSI Technician Facts and Figures

Certification name BICSI Technician
Prerequisites & required courses Three or more years of verifiable ICT industry installation experience (must be within past five years to qualify)

Adhere to the BICSI Code of Ethics and Standards of Conduct

Physical requirements: Distinguish between colors, stand for extended periods, lift and carry up to 50 pounds, climb ladders, and possess manual dexterity necessary to perform fine motor tasks

Technician exam prereqs: Both the Installer 2 Copper and Installer 2 Optical Fiber credentials OR the Installer 2 credential

Note: There are no additional credentials required for candidates attempting the Technician Skip-Level exam.

Recommended prerequisites:

50 hours review of BICSI Information Technology Systems Installation Methods Manual (ITSIMM)

TE350: BICSI Technician Training course ($2,545)
IN225: Installer 2 Copper Training course ($2,305)
IN250: Installer 2 Optical Fiber Training course ($2,505)

Number of exams One two-part exam: a written exam (140 multiple-choice questions*) and a hands-on, performance-based exam (the hands-on exam is delivered on the last day of the TE350 course; the written exam is administered the day after the course concludes)

*If the candidate doesn’t have both the Installer 2 Copper and Installer 2 Optical Fiber credentials or an Installer 2 credential, the written Skip Level exam will have 170 questions.

Cost per exam $295 (non-refundable application fee must be received by BICSI 15 days prior to exam; retake fee of $130 applies)
Self-study materials Information Technology Systems Installation Methods Manual (ITSIMM), 7th edition: electronic download, $220 member/$240 non-member; print and download combo, $260 member/$290 non-member; printed manual, $220 member/$240 non-member. Web-based training through BICSI CONNECT

BICSI Registered Communications Distribution Designer (RCDD) Facts and Figures

Certification name BICSI Registered Communications Distribution Designer (RCDD)
Prerequisites & required courses

Five or more years of verifiable ICT industry design experience (must be within past 10 years to qualify)

OR

Two or more years of verifiable ICT design experience (must be within the past 10 years) plus three additional years of ICT equivalents from approved education, experience, or ICT licenses or certifications (CCNA, for example)

Adhere to the BICSI Code of Ethics and Standards of Conduct

Recommended prerequisites:
Minimum of 125-150 hours review of BICSI’s Telecommunications Distribution Methods Manual (TDMM)

DD101: Foundations of Telecommunications Distribution Design ($1,030) (BICSI  CONNECT online course)

DD102: Designing Telecommunications Distribution Systems ($2,815)


TDMM flash cards ($275)

RCDD Test Preparation Course ($925) (BICSI CONNECT online course)

Number of exams One exam (100 questions, 2.5 hours)
Cost per exam $495 BICSI member/$725 non-member application fee (non-refundable application fee must be received by BICSI 15 days prior to exam; retake fee of $225 BICSI member/$340 non-member)
Self-study materials

Telecommunications Distribution Methods Manual (TDMM), 13th edition: electronic download, $310 member/$380 non-member; print and download combo, $350 member/$435 non-member; printed manual, $310 member/$380 non-member

Web-based training through BICSI CONNECT

CCT Routing & Switching: Cisco Certified Technician Routing & Switching

Cisco certifications are valued throughout the tech industry. The Cisco Certified Technician, or CCT, certification is an entry-level credential that demonstrates a person’s ability to support and maintain Cisco networking devices at a customer site.

The Routing & Switching credential best fits our list of best computer hardware certifications, and it serves as an essential foundation for supporting Cisco devices and systems in general.

The CCT requires passing a single exam. Topics include identification of Cisco equipment and related hardware, such as switches and routers, general networking and service knowledge, working with the Cisco Technical Assistance Center (TAC), and describing Cisco IOS software operating modes. Candidates should also have a working knowledge of Cisco command-line interface (CLI) commands for connecting to and remotely servicing Cisco products.
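Those IOS operating modes are signaled by the device prompt: `>` for user EXEC, `#` for privileged EXEC, and suffixes such as `(config)#` for configuration modes. As an illustrative sketch only (the function and its scope are ours, not part of any Cisco tool), a few lines of Python can map prompts to modes:

```python
def ios_mode(prompt: str) -> str:
    """Classify a Cisco IOS device prompt into its operating mode.

    Illustrative sketch only -- the CCT exam expects candidates to
    recognize these modes on live gear, not in a script.
    """
    if prompt.endswith("(config-if)#"):
        return "interface configuration"
    if prompt.endswith("(config)#"):
        return "global configuration"
    if prompt.endswith("#"):
        return "privileged EXEC"
    if prompt.endswith(">"):
        return "user EXEC"
    return "unknown"

print(ios_mode("Router>"))          # user EXEC
print(ios_mode("Router#"))          # privileged EXEC
print(ios_mode("Switch(config)#"))  # global configuration
```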

CCT Routing & Switching Facts and Figures

Certification name Cisco Certified Technician (CCT) Routing & Switching
Prerequisites & required courses


Recommended training: Supporting Cisco Routing and Switching Network Devices (RSTECH) ($299)

Number of exams One: 640-692 RSTECH (60-70 questions, 90 minutes)
Cost per exam Exam administered by Pearson VUE.

Self-study materials The Cisco Study Material page provides links to the course, study groups, exam tutorials and other related content, including the exam syllabus, training videos and seminars.

CompTIA Server+

CompTIA also offers a server-related certification, which steps up from basic PC hardware, software and networking topics to the more demanding, powerful and expensive capabilities usually associated with server systems.

The CompTIA Server+ credential goes beyond basic topics to include coverage of more advanced storage systems, IT environments, virtualization, and disaster recovery and business continuity. It also puts a strong emphasis on best practices and procedures for server problem diagnosis and troubleshooting. Although Server+ is vendor-neutral in coverage, organizations such as Dell, Intel, Microsoft, Xerox, Lenovo and HP employ Server+ credentialed technicians.

Those who work or want to work in server rooms or data centers, with and around servers on a regular basis, will find the Server+ credential worth studying for and earning. It can also be a steppingstone into vendor-specific server technician training programs at such companies as those mentioned above, or with their authorized resellers and support partners.

Note that the CompTIA Server+ credential is still listed on that organization’s website as “good for life,” meaning it does not impose a renewal or continuing education requirement on its holders. The SK0-004 exam launched on July 31, 2015. Typically, exams are available for at least two years. If CompTIA’s revision history for Server+ is any guide to future updates and revisions, it’s likely that we’ll see a new exam making an appearance sometime before the end of 2019.

Server+ Facts and Figures

Certification name  CompTIA Server+
Prerequisites & required courses  No prerequisites

Recommended experience includes CompTIA A+ certification plus a minimum of 18-24 months IT-related experience

Number of exams  One: SK0-004 (100 questions, 90 minutes, 750 out of 900 passing score)
Cost per exam $302. Exam administered by Pearson VUE. Exam vouchers available at CompTIA.
Self-study materials

CompTIA offers a number of self-study materials, including exam objectives, its CertMaster online study tool, sample questions, books and more. Formal training courses are also offered. Links to CompTIA training courses may be found on the certification web page. Additional resources may also be found at the CompTIA Marketplace.

CompTIA Server+ Study Guide: Exam SK0-004, 1st edition, by Troy McMillan, published June 20, 2016, Sybex, ISBN-10: 1119137829, ISBN-13: 978-1119137825

Beyond the Top 5: More hardware certifications

There are many more hardware-oriented certifications available that you might want to consider. As you get into IT, develop a sense of your own interests and observe the hardware systems and solutions around you, you’ll be able to dig deeper into this arena.

You can investigate all the major system vendors (including HP, Dell, IBM, and other PC and server makers) as well as networking and infrastructures companies (such as Juniper and Fortinet) to find hardware-related training and certification to occupy you throughout a long and successful career.

Although ExpertRating offers many credentials, we rejected them after seeing several complaints regarding the general quality of the courses. Such complaints obviously come from disgruntled customers, but they were enough to make us proceed with caution.

This is also an area where constant change in tools and technology is the norm. That means a course of lifelong learning will be essential to help you stay current on what’s in your working world today and likely to show up on the job soon.

Answering the top 10 questions about supercloud

As we exited the isolation economy last year, we introduced supercloud as a term to describe something new that was happening in the world of cloud computing.

In this Breaking Analysis, we address the 10 questions we get asked most frequently about supercloud:

1. In an industry full of hype and buzzwords, why does anyone need a new term?

2. Aren’t hyperscalers building out superclouds? We’ll explain why the term supercloud connotes something different from a hyperscale cloud.

3. We’ll talk about the problems superclouds solve.

4. We’ll further define the critical aspects of a supercloud architecture.

5. We often get asked: Isn’t this just multicloud? Well, we don’t think so and we’ll explain why.

6. In an earlier episode we introduced the notion of superPaaS. Isn’t a plain-vanilla PaaS already a superPaaS? Again, we don’t think so, and we’ll explain why.

7. Who will actually build (and who are the players currently building) superclouds?

8. What workloads and services will run on superclouds?

9. What are some examples of supercloud?

10. Finally, we’ll answer what you can expect next on supercloud from SiliconANGLE and theCUBE.

Why do we need another buzzword?

Late last year, ahead of Amazon Web Services Inc.’s re:Invent conference, we were inspired by a post from Jerry Chen called Castles in the Cloud. In that blog he introduced the idea that there were submarkets emerging in cloud that presented opportunities for investors and entrepreneurs, that the big cloud vendors weren’t going to suck all the value out of the industry. And so we introduced this notion of supercloud to describe what we saw as a value layer emerging above the hyperscalers’ “capex gift.”

It turns out that we weren’t the only ones using the term, as both Cornell and MIT have used the phrase in somewhat similar but different contexts.

The point is something new was happening in the AWS and other ecosystems. It was more than infrastructure as a service and platform as a service and wasn’t just software as a service running in the cloud.

It was a new architecture that integrates infrastructure, unique platform attributes and software to solve new problems that the cloud vendors in our view weren’t addressing by themselves. It seemed to us that the ecosystem was pursuing opportunities across clouds that went beyond conventional implementations of multi-cloud.

In addition, we felt this trend pointed to structural change going on at the industry level that supercloud metaphorically was highlighting.

So that’s the background on why we felt a new catchphrase was warranted. Love it or hate it… it’s memorable.

Industry structures have always mattered in tech

To that last point about structural industry transformation: Andy Rappaport is sometimes credited with identifying the shift from the vertically integrated mainframe era to the horizontally fragmented personal computer- and microprocessor-based era in his Harvard Business Review article from 1991.

In fact, it was actually David Moschella, an International Data Corp. senior vice president at the time, who introduced the concept in 1987, a full four years before Rappaport’s article was published. Moschella, along with IDC’s head of research Will Zachmann, saw that it was clear Intel Corp., Microsoft Corp., Seagate Technology and others would replace the system vendors’ dominance.

In fact, Zachmann accurately predicted in the late 1980s the demise of IBM, well ahead of its epic downfall when the company lost approximately 75% of its value. At an IDC Briefing Session (now called Directions), Moschella put forth a graphic that looked similar to the first two concepts on the chart below.

We don’t have to review the shift from IBM as the epicenter of the industry to Wintel – that’s well-understood.

What isn’t as widely discussed is a structural concept Moschella put out in 2018 in his book “Seeing Digital,” which introduced the idea of the Matrix shown on the righthand side of this chart. Moschella posited that a new digital platform of services was emerging built on top of the internet, hyperscale clouds and other intelligent technologies that would define the next era of computing.

He used the term matrix because the conceptual depiction included horizontal technology rows, like the cloud… but for the first time included connected industry columns. Moschella pointed out that historically, industry verticals had a closed value chain or stack of research and development, production, distribution, etc., and that expertise in that specific vertical was critical to success. But now, because of digital and data, for the first time, companies were able to jump industries and compete using data. Amazon in content, payments and groceries… Apple in payments and content… and so forth. Data was now the unifying enabler and this marked a changing structure of the technology landscape.

Listen to David Moschella explain the Matrix and its implications on a new generation of leadership in tech.

So the term supercloud is meant to imply more than running in hyperscale clouds. Rather, it’s a new type of digital platform comprising a combination of multiple technologies – enabled by cloud scale – with new industry participants from financial services, healthcare, manufacturing, energy, media and virtually all industries. Think of it as kind of an extension of “every company is a software company.”

Basically, thanks to the cloud, every company in every industry now has the opportunity to build their own supercloud. We’ll come back to that.

Aren’t hyperscale clouds superclouds?

Let’s address what’s different about superclouds relative to hyperscale clouds.

This one’s pretty straightforward and obvious. Hyperscale clouds are walled gardens where they want your data in their cloud and they want to keep you there. Sure, every cloud player realizes that not all data will go to their cloud, so they’re meeting customers where their data lives with initiatives such as Amazon Outposts, Azure Arc and Google Anthos. But at the end of the day, the more homogeneous they can make their environments, the better control, security, costs and performance they can deliver. The more complex the environment, the more difficult to deliver on their promises and the less margin left for them to capture.

Will the hyperscalers get more serious about cross cloud services? Maybe, but they have plenty of work to do within their own clouds. And today at least they appear to be providing the tools that will enable others to build superclouds on top of their platforms. That said, we never say never when it comes to companies such as AWS. And for sure we see AWS delivering more integrated digital services such as Amazon Connect to solve problems in a specific domain, call centers in this case.

What problems do superclouds solve?

We’ve all seen the stats from IDC or Gartner or whomever that customers on average use more than one cloud. And we know these clouds operate in disconnected silos for the most part. That’s a problem because each cloud requires different skills. The development environment is different, as is the operating environment, with different APIs and primitives and management tools that are optimized for each respective hyperscale cloud. Their functions and value props don’t extend to their competitors’ clouds. Why would they?

As a result, there’s friction when moving between different clouds. It’s hard to share data, move work, secure and govern data, and enforce organizational policies and edicts across clouds.

Supercloud is an architecture designed to create a single environment that enables management of workloads and data across clouds in an effort to take out complexity, accelerate application development, streamline operations and share data safely irrespective of location.

Pretty straightforward, but nontrivial, which is why we often ask chief executives whether stock buybacks and dividends will yield as much return as building out superclouds that solve really specific problems and create differentiable value for their firms.

What are the critical attributes of a supercloud?

Let’s dig in a bit more to the architectural aspects of supercloud. In other words… what are the salient attributes that define supercloud?

First, a supercloud runs a set of specific services, designed to solve a unique problem. Superclouds offer seamless, consumption-based services across multiple distributed clouds.

Supercloud leverages the underlying cloud-native tooling of a hyperscale cloud but it’s optimized for a specific objective that aligns with the problem it’s solving. For example, it may be optimized for cost or low latency or sharing data or governance or security or higher performance networking. But the point is, the collection of services delivered is focused on unique value that isn’t being delivered by the hyperscalers across clouds.

A supercloud abstracts the underlying and siloed primitives of the native PaaS layer from the hyperscale cloud and using its own specific platform-as-a-service tooling, creates a common experience across clouds for developers and users. In other words, the superPaaS ensures that the developer and user experience is identical, irrespective of which cloud or location is running the workload.
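To make that abstraction concrete, here is a minimal, hypothetical Python sketch. Every name in it (CloudAdapter, SuperPaas, put_object and so on) is invented for illustration and maps to no real product API; the in-memory dictionaries stand in for calls to each cloud's native storage service:

```python
from abc import ABC, abstractmethod

class CloudAdapter(ABC):
    """One adapter per hyperscale cloud wraps that cloud's native primitives."""
    @abstractmethod
    def put_object(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get_object(self, key: str) -> bytes: ...

class AwsAdapter(CloudAdapter):
    def __init__(self):
        self._store = {}  # stands in for calls to AWS-native storage
    def put_object(self, key, data):
        self._store[key] = data
    def get_object(self, key):
        return self._store[key]

class AzureAdapter(CloudAdapter):
    def __init__(self):
        self._store = {}  # stands in for calls to Azure-native storage
    def put_object(self, key, data):
        self._store[key] = data
    def get_object(self, key):
        return self._store[key]

class SuperPaas:
    """Developers code against this one surface, never the native PaaS."""
    def __init__(self, adapters):
        self.adapters = adapters
    def put(self, cloud: str, key: str, data: bytes):
        self.adapters[cloud].put_object(key, data)
    def get(self, cloud: str, key: str) -> bytes:
        return self.adapters[cloud].get_object(key)

paas = SuperPaas({"aws": AwsAdapter(), "azure": AzureAdapter()})
paas.put("aws", "report.csv", b"q1,q2")
paas.put("azure", "report.csv", b"q1,q2")
print(paas.get("azure", "report.csv"))  # identical call shape on either cloud
```

The point of the sketch is only the shape: the developer-facing calls are identical regardless of which cloud runs the workload, which is what the superPaaS layer promises.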

And it does so in an efficient manner, meaning it has the metadata knowledge and management that can optimize for latency, bandwidth, recovery, data sovereignty or whatever unique value the supercloud is delivering for the specific use cases in the domain.
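A toy sketch of that metadata-driven optimization, with made-up clouds, regions, latencies and policy fields: the supercloud consults its metadata and picks the lowest-latency location that satisfies a data-sovereignty constraint:

```python
# Hypothetical per-region metadata a supercloud control plane might hold.
regions = [
    {"cloud": "aws",   "region": "us-east-1",    "latency_ms": 12, "sovereignty": "US"},
    {"cloud": "azure", "region": "eu-west",      "latency_ms": 48, "sovereignty": "EU"},
    {"cloud": "gcp",   "region": "europe-west1", "latency_ms": 40, "sovereignty": "EU"},
]

def place(workload_policy, regions):
    """Return the lowest-latency region that satisfies the sovereignty policy."""
    eligible = [r for r in regions
                if r["sovereignty"] == workload_policy["sovereignty"]]
    return min(eligible, key=lambda r: r["latency_ms"])

choice = place({"sovereignty": "EU"}, regions)
print(choice["cloud"], choice["region"])  # gcp europe-west1
```

A real supercloud would weigh far more dimensions (bandwidth, cost, recovery objectives), but the principle is the same: metadata drives placement, not the developer.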

A supercloud comprises a superPaaS capability that allows ecosystem partners to add incremental value on top of the supercloud platform to fill gaps, accelerate features and innovate. A superPaaS can use open tooling but applies those development tools to create a unique and specific experience supporting the design objectives of the supercloud.

Supercloud services can be infrastructure-related, application services, data services, security services, users services, etc., designed and packaged to bring unique value to customers… again that the hyperscalers are not delivering across clouds or on-premises.

Finally, these attributes are highly automated where possible. Superclouds take a page from hyperscalers in terms of minimizing human intervention wherever possible, applying automation to the specific problem they’re solving.

Isn’t supercloud just another term for multicloud?

What we’d say to that is: Perhaps, but not really. Call it multicloud 2.0 if you want to invoke a commonly used format. But as Dell’s Chuck Whitten proclaimed, multicloud by design is different than multicloud by default.

What he means is that, to date, multicloud has largely been a symptom of multivendor… or of M&A. And when you look at most so-called multicloud implementations, you see things like an on-prem stack wrapped in a container and hosted on a specific cloud.

Or increasingly a technology vendor has done the work of building a cloud-native version of its stack and running it on a specific cloud… but historically it has been a unique experience within each cloud with no connection between the cloud silos. And certainly not a common developer experience with metadata management across clouds.

Supercloud sets out to build incremental value across clouds and above hyperscale capex that goes beyond cloud compatibility within each cloud. So if you want to call it multicloud 2.0, that’s fine.

We choose to call it supercloud.

Isn’t plain old PaaS already supercloud?

Well, we’d say no. A supercloud and its corresponding superPaaS layer give users the freedom to store, process, manage, secure and connect islands of data across a continuum with a common developer experience across clouds.

Importantly, the sets of services are designed to support the supercloud’s objectives – e.g., data sharing or data protection or storage and retrieval or cost optimization or ultra-low latency, etc. In other words, the services offered are specific to that supercloud and will vary by each offering. OpenShift, for example, can be used to construct a superPaaS but in and of itself isn’t a superPaaS. It’s generic.

The point is that a supercloud and its inherent superPaaS will be optimized to solve specific problems such as low latency for distributed databases or fast backup and recovery and ransomware protection — highly specific use cases that the supercloud is designed to solve for.
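To make the idea concrete, here is a minimal sketch of the kind of cross-cloud abstraction a superPaaS exposes: one storage API, many interchangeable backends, so the developer never programs against a specific cloud. The class and method names here are entirely hypothetical, and the in-memory backend stands in for real cloud SDK calls.

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Hypothetical cross-cloud storage interface: one API, many clouds."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in backend; a real superPaaS would wrap S3, Azure Blob, GCS, etc."""

    def __init__(self):
        self._blobs = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]

def replicate(key, data, stores):
    """Write once, land on every cloud -- the developer never names a cloud."""
    for store in stores:
        store.put(key, data)

# Three stand-ins for three hyperscalers.
clouds = [InMemoryStore(), InMemoryStore(), InMemoryStore()]
replicate("backup/db.snap", b"snapshot-bytes", clouds)
print(all(c.get("backup/db.snap") == b"snapshot-bytes" for c in clouds))  # True
```

The design point is the interface, not the backends: swap in per-cloud implementations and the calling code is unchanged, which is the "common developer experience" a supercloud promises.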

SaaS as well is a subset of supercloud. Most SaaS platforms either run in their own cloud or have bits and pieces running in public clouds (e.g. analytics). But the cross-cloud services are few and far between or often nonexistent. We believe SaaS vendors must evolve and adopt supercloud to offer distributed solutions across cloud platforms and stretching out to the near and far edge.

Who is building superclouds?

Another question we often get is: Who has a supercloud and who is building a supercloud? Who are the contenders?

Well, most companies that consider themselves cloud players will, we believe, be building superclouds. Above is a common Enterprise Technology Research graphic we like to show with Net Score or spending momentum on the Y axis and Overlap or pervasiveness in the ETR surveys on the X axis. This is from the April survey of well over 1,000 chief executive officers and information technology buyers. And we’ve randomly chosen a number of players we think are in the supercloud mix and we’ve included the hyperscalers because they are the enablers.

We’ve added some of those nontraditional industry players we see building superclouds such as Capital One, Goldman Sachs and Walmart, in deference to Moschella’s observation about verticals. This goes back to every company being a software company. And rather than pattern-matching an outdated SaaS model we see a new industry structure emerging where software and data and tools specific to an industry will lead the next wave of innovation via the buildout of intelligent digital platforms.

We’ve talked a lot about Snowflake Inc.’s Data Cloud as an example of supercloud, as well as the momentum of Databricks Inc. (not shown above). VMware Inc. is clearly going after cross-cloud services. Basically every large company we see is either pursuing supercloud initiatives or thinking about it. Dell Technologies Inc., for example, showed Project Alpine at Dell Technologies World – that’s a supercloud in development. Snowflake is introducing a new app dev capability based on its superPaaS (our term, of course – it doesn’t use the phrase). MongoDB Inc., Couchbase Inc., Nutanix Inc., Veeam Software, CrowdStrike Holdings Inc., Okta Inc. and Zscaler Inc. are all in the mix. Even the likes of Cisco Systems Inc. and Hewlett Packard Enterprise Co., in our view, will be building superclouds.

Although ironically, as an aside, Fidelma Russo, HPE’s chief technology officer, said on theCUBE she wasn’t a fan of cloaking mechanisms. But when we spoke to HPE’s head of storage services, Omer Asad, we felt his team is clearly headed in a direction that we would consider supercloud. It could be semantics or it could be that parts of HPE are in a better position to execute on supercloud. Storage is an obvious starting point. The same can be said of Dell.

Listen to Fidelma Russo explain her aversion to building a manager of managers.

And we’re seeing emerging companies like Aviatrix Systems Inc. (network performance), Starburst Data Inc. (self-service analytics for distributed data), Clumio Inc. (data protection – not supercloud today but working on it) and others building versions of superclouds that solve a specific problem for their customers. And we’ve spoken to independent software vendors such as Adobe Systems Inc., Automatic Data Processing LLC and UiPath Inc., which are all looking at new ways to go beyond the SaaS model and add value within cloud ecosystems, in particular building data services that are unique to their value proposition and will run across clouds.

So yeah – pretty much every tech vendor with any size or momentum and new industry players are coming out of hiding and competing… building superclouds. Many that look a lot like Moschella’s matrix with machine intelligence and artificial intelligence and blockchains and virtual reality and gaming… all enabled by the internet and hyperscale clouds.

It’s moving fast and it’s the future, in our opinion, so don’t get too caught up in the past or you’ll be left behind.

What are some examples of superclouds?

We’ve given many in the past, but let’s try to be a bit more specific. Below we cite a few and we’ll answer two questions in one section here: What workloads and services will run in superclouds and what are some examples?

Analytics. Snowflake is the furthest along with its data cloud in our view. It’s a supercloud optimized for data sharing, governance, query performance, security, ecosystem enablement and ultimately monetization. Snowflake is now bringing in new data types and open-source tooling and it ticks the attribute boxes on supercloud we laid out earlier.

Converged databases. Running transaction and analytics workloads. Take a look at what Couchbase is doing with Capella and how it’s enabling stretching the cloud to the edge with Arm-based platforms and optimizing for low latency across clouds and out to the edge.

Document database workloads. Look at MongoDB – a developer-friendly platform that with Atlas is moving to a supercloud model running document databases very efficiently. Accommodating analytic workloads and creating a common developer experience across clouds.

Data science workloads. For example, Databricks is bringing a common experience for data scientists and data engineers driving machine intelligence into applications and fixing the broken data lake with the emergence of the lakehouse.

General-purpose workloads. For example, VMware’s domain. Very clearly there’s a need to create a common operating environment across clouds and on-prem and out to the edge and VMware is hard at work on that — managing and moving workloads, balancing workloads and being able to recover very quickly across clouds.

Network routing. This is the primary focus of Aviatrix, building what we consider a supercloud and optimizing network performance and automating security across clouds.

Industry-specific workloads. For example, Capital One announcing its cost optimization platform for Snowflake – piggybacking on Snowflake’s supercloud. We believe it’s going to test that concept outside its own organization and expand across other clouds as Snowflake grows its business beyond AWS. Walmart Inc. is working with Microsoft to create an on-prem to Azure experience – yes, that counts. We’ve written about what Goldman is doing and you can bet dollars to donuts that Oracle Corp. will be building a supercloud in healthcare with its Cerner acquisition.

Supercloud is everywhere you look. Sorry, naysayers. It’s happening.

What’s next from theCUBE?

With all the industry buzz and debate about the future, John Furrier and the team at SiliconANGLE have decided to host an event on supercloud. We’re motivated and inspired to further the conversation. TheCUBE on Supercloud is coming.

On Aug. 9 out of our Palo Alto studios we’ll be running a live program on the topic. We’ve reached out to a number of industry participants — VMware, Snowflake, Confluent, Skyhigh Security, HashiCorp, Cloudflare and Red Hat — to get the perspective of technologists building superclouds.

And we’ve invited a number of vertical industry participants in financial services, healthcare and retail that we’re excited to have on along with analysts, thought leaders and investors.

We’ll have more details in the coming weeks, but for now if you’re interested please reach out to us with how you think you can advance the discussion and we’ll see if we can fit you in.

So mark your calendars and stay tuned for more information.

Keep in touch

Thanks to Alex Myerson, who does the production, podcasts and media workflows for Breaking Analysis. Special thanks to Kristen Martin and Cheryl Knight, who help us keep our community informed and get the word out, and to Rob Hof, our editor in chief at SiliconANGLE.

Remember we publish each week on Wikibon and SiliconANGLE. These episodes are all available as podcasts wherever you listen.

Email, DM @dvellante on Twitter and comment on our LinkedIn posts.

Also, check out this ETR Tutorial we created, which explains the spending methodology in more detail. Note: ETR is a separate company from Wikibon and SiliconANGLE. If you would like to cite or republish any of the company’s data, or inquire about its services, please contact ETR at

All statements made regarding companies or securities are strictly beliefs, points of view and opinions held by SiliconANGLE media, Enterprise Technology Research, other guests on theCUBE and guest writers. Such statements are not recommendations by these individuals to buy, sell or hold any security. The content presented does not constitute investment advice and should not be used as the basis for any investment decision. You and only you are responsible for your investment decisions.

Disclosure: Many of the companies cited in Breaking Analysis are sponsors of theCUBE and/or clients of Wikibon. None of these firms or other companies have any editorial control over or advanced viewing of what’s published in Breaking Analysis.

Show your support for our mission by joining our Cube Club and Cube Event Community of experts. Join the community that includes Amazon Web Services and CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger and many more luminaries and experts.

Oracle Certification Guide: Overview and Career Paths

Oracle offers a multitude of hardware and software solutions designed to simplify and empower IT. Perhaps best known for its premier database software, the company also offers cloud solutions, servers, engineered systems, storage and more. Oracle has more than 430,000 customers in 175 countries, about 138,000 employees and exceeds $37.7 billion in revenue.

Over the years, Oracle has developed an extensive certification program. Today, it includes six certification levels that span nine different categories with more than 200 individual credentials. Considering the depth and breadth of this program, and the number of Oracle customers, it’s no surprise that Oracle certifications are highly sought after.

[For more information read our Oracle CRM review, and our review of Oracle’s accounting suite.]

Oracle certification program overview

Oracle’s certification program is divided into these nine primary categories:

  • Oracle Applications
  • Oracle Cloud
  • Oracle Database
  • Oracle Enterprise Management
  • Oracle Industries
  • Oracle Java and Middleware
  • Oracle Operating Systems
  • Oracle Systems
  • Oracle Virtualization

Additionally, Oracle’s credentials are offered at six certification levels:

  • Junior Associate
  • Associate
  • Professional
  • Master
  • Expert
  • Specialist

Most Oracle certification exams are proctored, cost $245, and contain a mix of scored and unscored multiple-choice questions. Candidates may take proctored exams at Pearson VUE, although some exams are offered at Oracle Testing Centers in certain locations. Some exams, such as Oracle Database 12c: SQL Fundamentals (1Z0-061) and Oracle Database 11g: SQL Fundamentals (1Z0-051), are also available non-proctored and may be taken online. Non-proctored exams cost $125. Check the Oracle University Certification website for details on specific exams.

Oracle Applications and Cloud certifications

The Oracle Applications certification category offers more than 60 individual credentials across 13 products or product groups, such as Siebel, E-Business Suite, Hyperion, JD Edwards EnterpriseOne and PeopleSoft. The majority of these certifications confer the Certified Implementation Specialist designation for a specific application, with various Certified Expert credentials also available. The Application certifications are aimed at individuals with expertise in selling and implementing specific Oracle solutions.

Oracle’s latest certification category is Oracle Cloud, which covers Java Cloud as well as a number of Oracle Cloud certifications, including Oracle Database Cloud. Cloud certs fall into the following sub-categories:

  • Infrastructure as a Service (IaaS)
  • Platform as a Service (PaaS), including Data Management, Application Development, Management Cloud and Mobile Cloud Service
  • Software as a Service (SaaS) – Oracle Customer Experience Cloud, including Service, Sales, Marketing and CPQ Cloud
  • Software as a Service (SaaS) – Oracle Enterprise Resource Planning Cloud, including Financials, Project Portfolio Management, Procurement and Risk Management Cloud
  • Software as a Service (SaaS) – Oracle Human Capital Management Cloud, including Workforce Rewards, Payroll, Talent Management and Global Human Resources Cloud
  • Software as a Service (SaaS) – Oracle Supply Chain Management Cloud, including Order Management, Product Master Data Management, Product Lifecycle Management, Manufacturing, Inventory Management, Supply Chain Planning and Logistics Cloud
These credentials recognize individuals who deploy applications, perform administration or deliver customer solutions in the cloud. Credentials mostly include Associate and Certified Implementation Specialist designations, with one Mobile Developer credential offered plus a professional-level Oracle Database Cloud Administrator.

Oracle Database certifications

Certifications in Oracle’s Database category are geared toward individuals who develop or work with Oracle databases. There are three main categories: Database Application Development, MySQL and Oracle Database.

Note: Oracle Database 12c was redesigned for cloud computing (and is included in both the Cloud and Database certification categories). The current version is Oracle Database 12c R2, which contains additional enhancements for in-memory databases and multitenant architectures. MySQL 5.6 has been optimized for performance and storage, so it can handle bigger data sets.

Whenever a significant version of either database is released, Oracle updates its certification exams over time. If an exam isn’t available for the latest release, candidates can take a previous version of the exam and then an updated exam when it becomes available. Though MySQL 5.6 certifications and exams are still available for candidates supporting that version, the new MySQL 5.7 certification track may be more appropriate for those just starting on their MySQL certification journeys.

Oracle currently offers the Oracle Database Foundations Certified Junior Associate, Oracle Certified Associate (OCA), Oracle Certified Professional (OCP), Oracle Certified Master (OCM), Oracle Certified Expert (OCE) and Specialist paths for Oracle Database 12c. In addition, Oracle offers the OCA credential for Oracle Database 12c R2 and an upgrade path for the OCP credential. Because many of these certifications are also popular within the Oracle Certification Program, we provide additional exam details and links in the following sections.

Other database certifications

Oracle Enterprise Management Certifications

The Oracle Enterprise Manager Certification path offers candidates the opportunity to demonstrate their skills in application, middleware, database and storage management. The Oracle Enterprise Manager 12c Certified Implementation Specialist exam (1Z0-457) certifies a candidate’s expertise in physical, virtual and cloud environments, as well as design, installation, implementation, reporting, and support of Oracle Enterprise Manager.

Oracle Database Foundations Certified Junior Associate

The Oracle Database Foundations Certified Junior Associate credential targets those who’ve participated in the Oracle Academy through a college or university program, computer science and database teachers, and individuals studying databases and computer science. As a novice-level credential, the Certified Junior Associate is intended for individuals with limited hands-on experience working on Oracle Database products. To earn this credential, candidates must pass the novice-level Oracle Database Foundations exam (1Z0-006).

Oracle Certified Associate (OCA) – Oracle Database 12c Administrator

The OCA certification measures the day-to-day operational management database skills of DBAs. Candidates must pass a SQL exam and another on Oracle Database administration. Candidates can choose one of the following SQL exams:

  • Oracle Database 12c SQL (1Z0-071)
  • Oracle Database 12c: SQL Fundamentals (1Z0-061) NOTE: This exam will be retired on November 30, 2019.

Candidates must also pass the Oracle Database 12c: Installation and Administration (1Z0-062) exam.
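To give a flavor of what the SQL exams cover — SELECT statements, joins, grouping and aggregates — here is a small, self-contained sketch. It uses Python’s built-in SQLite driver purely as a convenient sandbox (the exams themselves target Oracle Database, and the schema below is made up for illustration).

```python
import sqlite3

# Throwaway in-memory database with a toy HR-style schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE departments (dept_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE employees (emp_id INTEGER PRIMARY KEY, name TEXT,
                            salary REAL, dept_id INTEGER REFERENCES departments);
    INSERT INTO departments VALUES (10, 'Sales'), (20, 'IT');
    INSERT INTO employees VALUES (1, 'Ada', 90000, 20), (2, 'Sam', 60000, 10),
                                 (3, 'Kim', 75000, 20);
""")

# A join plus a grouped aggregate -- core fundamentals-exam territory.
rows = conn.execute("""
    SELECT d.name, COUNT(*) AS headcount, AVG(e.salary) AS avg_salary
    FROM employees e JOIN departments d ON e.dept_id = d.dept_id
    GROUP BY d.name ORDER BY d.name
""").fetchall()
print(rows)  # [('IT', 2, 82500.0), ('Sales', 1, 60000.0)]
```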

Oracle Certified Associate – Oracle Database 12cR2 Administrator

To earn the Oracle Database 12cR2 OCA credential, candidates must first earn either the Oracle Database SQL Certified Associate, Oracle Database 11g Administrator Certified Associate, or the Oracle Database 12c Administrator Certified Associate. In addition, candidates are required to pass the Oracle Database 12cR2 Administration exam (1Z0-072).

Oracle Certified Professional (OCP) – Oracle Database 12c Administrator

The OCP certification covers more advanced database skills. You must have the OCA Database 12c Administrator certification, complete the required training, submit a course submission form and pass the Oracle Database 12c: Advanced Administration (1Z0-063) exam.

Professionals who possess either the Oracle Database 11g Administrator Certified Professional or Oracle Database 12c Administrator Certified Professional credential may upgrade to the Oracle Database 12cR2 Administration Certified Professional credential by passing the Oracle DBA upgrade exam (1Z0-074).

Oracle Certified Master (OCM) – Oracle Database 12c Administrator

To achieve OCM Database 12c Administrator certification, you must have the OCP Database 12c Administrator certification, complete two advanced courses, pass the Oracle Database 12c Certified Master exam (12cOCM), complete the course submission form, and submit the Fulfillment Kit request.

Oracle also offers the Oracle Database 12c Maximum Availability Certified Master certification, which requires three separate credentials: the Oracle Database 12c Administrator Certified Master, Oracle Certified Expert, Oracle Database 12c – RAC and Grid Infrastructure Administration, and Oracle Certified Expert, Oracle Database 12c – Data Guard Administration.

Oracle Certified Expert (OCE) – Oracle Database 12c

The OCE Database 12c certifications include Maximum Availability, Data Guard Administrator, RAC and Grid Infrastructure Administrator, and Performance Management and Tuning credentials. All these certifications involve prerequisite certifications. Performance Management and Tuning takes the OCP Database 12c as a prerequisite and the Data Guard Administrator certification requires the OCP Database 12c credential. The RAC and Grid Infrastructure Administrator credential gives candidates the most flexibility, allowing them to choose from the OCP Database 11g, OCP Database 12c, or Oracle Certified Expert – Real Application Clusters 11g and Grid Infrastructure Administration.

Once the prerequisite credentials are earned, candidates can then achieve Data Guard Administrator, RAC and Grid Infrastructure Administrator or Performance Management and Tuning by passing one exam. Achieving OCP 12c plus the RAC and Grid Infrastructure Administration and Data Guard Administration certifications earns the Maximum Availability credential.

Oracle Database Certified Implementation Specialist

Oracle also offers three Certified Implementation Specialist credentials: the Oracle Real Application Clusters 12c, Oracle Database Performance and Tuning 2015, and Oracle Database 12c. Specialist credentials target individuals with a background in selling and implementing Oracle solutions. Each of these credentials requires candidates to pass a single test to earn the designation.

Oracle Industries certifications

Oracle Industries is another sizable category, with more than 25 individual certifications focused on Oracle software for the construction and engineering, communications, health sciences, insurance, tax and utilities industries. All these certifications recognize Certified Implementation specialists for the various Oracle industry products, which means they identify individuals proficient in implementing and selling industry-specific Oracle software.

Oracle Java and Middleware Certifications

The Java and Middleware certifications span several subcategories, such as Business Intelligence, Application Server, Cloud Application, Data Integration, Identity Management, Mobile, Java, Oracle Fusion Middleware Development Tools and more. Java and Middleware credentials represent all levels of the Oracle Certification Program – Associate, Professional and so on – and include Java Developer, Java Programmer, System Administrator, Architect and Implementation Specialist.

The highly popular Java category has certifications for Java SE (Standard Edition), and Java EE (Enterprise Edition) and Web Services. Several Java certifications that require a prior certification accept either the corresponding Sun or Oracle credential.

Oracle Operating Systems certifications

The Oracle Operating Systems certifications include Linux and Solaris. These certifications are geared toward administrators and implementation specialists.

The Linux 6 certifications include OCA and OCP Linux 6 System Administrator certifications, as well as an Oracle Linux Certified Implementation Specialist certification. The Linux 6 Specialist is geared to partners but is open to all candidates. Both the Linux OCA and Specialist credentials require a single exam. To achieve the OCP, candidates must first earn either the OCA Linux 5 or 6 System Administrator or OCA Linux Administrator (now retired) credential, plus pass an exam.

The Solaris 11 certifications include the OCA and OCP System Administrator certifications plus an Oracle Solaris 11 Installation and Configuration Certified Implementation Specialist certification. The OCA and OCP Solaris 11 System Administrator certifications identify Oracle Solaris 11 administrators who have a fundamental knowledge of and base-level skills with the UNIX operating system, commands, and utilities. As indicated by its name, the Implementation Specialist cert identifies intermediate-level implementation team members who install and configure Oracle Solaris 11.

Oracle Systems certifications

Oracle Systems certifications include Engineered Systems (Big Data Appliance, Exadata, Exalogic Elastic Cloud, Exalytics, and Private Cloud Appliance), Servers (Fujitsu and SPARC) and Storage (Oracle ZFS, Pillar Axiom, Tape Storage, Flash Storage System). Most of these certifications aim at individuals who sell and implement one of the specific solutions. The Exadata certification subcategory also includes Oracle Exadata X3, X4 and X5 Expert Administrator certifications for individuals who administer, configure, patch, and monitor the Oracle Exadata Database Machine platform.

Oracle Virtualization certifications

The Virtualization certifications cover Oracle Virtual Machine (VM) Server for X86. This credential is based on Oracle VM 3.0 for X86, and recognizes individuals who sell and implement Oracle VM solutions.

The Oracle VM 3.0 for x86 Certified Implementation Specialist certification aims at intermediate-level team members proficient in installing OVM 3.0 Server and OVM 3.0 Manager components, discovering OVM Servers, configuring network and storage repositories and more.

The sheer breadth and depth of Oracle’s certification program creates ample opportunities for professionals who want to work with Oracle technologies, or who already do and want their skills recognized and validated. Although there are many specific Oracle products in which to specialize in varying capacities, the main job roles include administrators, architects, programmers/developers and implementation specialists.

Every company that runs Oracle Database, Oracle Cloud, or Oracle Linux or Solaris needs qualified administrators to deploy, maintain, monitor and troubleshoot these solutions. These same companies also need architects to plan and design solutions that meet business needs and are appropriate for the specific environments in which they’re deployed, indicating that the opportunities for career advancement in Oracle technologies are abundant.

Job listings and hiring data indicate that programmers and developers continue to be highly sought-after in the IT world. Programming and development skills are some of the most sought-after by hiring managers in 2019, and database administration isn’t far behind. A quick search on Indeed results in almost 12,000 hits for “Oracle developer,” which is a great indication of both need and opportunity. Not only do developers create and modify Oracle software, they often must know how to design software from the ground up, package products, import data, write scripts and develop reports.

And, of course, Oracle and its partners will always need implementation specialists to sell and deploy the company’s solutions. This role is typically responsible for tasks that must be successfully accomplished to get a solution up and running in a client’s environment, from creating a project plan and schedule, to configuring and customizing a system to match client specifications.

Oracle training and resources

It’s not surprising that Oracle has an extensive library of exam preparation materials. Check the Oracle University website for hands-on instructor-led training, virtual courses, training on demand, exam preparation seminars, practice exams and other training resources.

A candidate’s best bet, however, is to first choose a certification path and then follow the links on the Oracle website to the required exam(s). If training is recommended or additional resources are available for a particular exam, Oracle lists them on the exam page.

Another great resource is the Oracle Learning Paths webpage, which provides a lengthy list of Oracle product-related job roles and their recommended courses.

Ed Tittel
Ed is a 30-year-plus veteran of the computing industry. He has worked as a programmer, technical manager, classroom instructor, network consultant and a technical evangelist for companies that include Burroughs, Schlumberger, Novell, IBM/Tivoli and NetQoS. He has written for numerous publications, including Tom’s IT Pro, and is the author of more than 140 computing books on information security, web markup languages and development tools, and Windows operating systems.

Earl Follis
Earl is also a 30-year veteran of the computer industry, who worked in IT training, marketing, technical evangelism, and market analysis in the areas of networking and systems technology and management. Ed and Earl met in the late 1980s when Ed hired Earl as a trainer at an Austin-area networking company that’s now part of HP. The two of them have written numerous books together on NetWare, Windows Server and other topics. Earl is also a regular writer for the computer trade press with many e-books, white papers and articles to his credit.

Blockchain Needs to Answer Four Major Questions

Speaking as part of a panel, “Blockchain: The Connectivity Cure” during the Automobility LA conference and expo, Naghmana Majed, Automotive and A&D Solutions Leader at IBM, offered some sobering commentary on the state of Blockchain.

“I see a lot of hype with Blockchain,” Majed said. “[But] we have to make some conscious decisions and ask, is this a good use case.”

Majed's comments went to the core of what Blockchain technology as a whole has been grappling with—even beyond the automotive space. With the value of Bitcoin no longer skyrocketing, Blockchain is still looking for a killer use case. And while there is plenty of excitement about applying the distributed ledger technology beyond cryptocurrency and into enterprise and commercial applications, the Automobility LA panel noted some significant challenges ahead.

What's the Business Model?

At the end of the day, companies have to make a profit, and any new technology needs to facilitate this in some way.

“What's the business model?” Majed asked. “If you can do the same thing in a more expensive way, does that make sense? Is there a clear business model in the domain of new businesses and opportunities?…We do a lot of work in supply chain. But that's cost cutting. We have to look at automotive and ask if we can provide a real business model and new revenue opportunities. Technology for the sake of technology does nothing.”

Fellow panelist Rahul Sonnad, co-founder and CEO of Tesloop (developer of an open-source software platform for connected vehicle sharing), echoed Majed's comments. “You don't need blockchain for mobility [applications]," he said. "I don't see a business model [blockchain] enables that you can't do without it.”

Does It Help with Trust?

Chris Ballinger, CEO and co-founder of the Mobility Open Blockchain Initiative (MOBI)—a consortium of automakers and tech companies collaborating to apply Blockchain to the automotive industry—told the audience that he believed the biggest benefit of Blockchain was the trust provided by the additional layers of encryption and security afforded by Blockchain's decentralized structure. “Centralization works fine as long as you trust the person in the middle,” Ballinger said. “Blockchain is cheaper, but also in a way that can be trusted.”

Playing Devil's Advocate, Sonnad questioned whether additional trust is really such a significant value proposition. “Businesses have been trusting each other for thousands of years. That's not the fundamental problem with mobility.”

But Ballinger countered that the increasing number of mobile devices, as the Internet of Things (IoT) moves toward one trillion connected devices, is creating a need for new levels of automated trust and accountability. He noted that one of the original use cases for Blockchain's smart contracts, proposed as far back as a 1996 research paper, was the automatic transfer of an auto title or lease.
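The title-transfer use case Ballinger mentions can be sketched as a tiny state machine: funds go into escrow, and ownership flips only when the contract's condition is met. This is illustrative Python only — a real smart contract would run and be enforced on-chain, and every name here (the class, the VIN, the parties) is hypothetical.

```python
class TitleContract:
    """Toy smart-contract state machine for an automatic auto-title transfer.

    Illustrative sketch only: on a real blockchain this logic would execute
    on-ledger so neither party has to trust a middleman to enforce it.
    """

    def __init__(self, vin: str, owner: str, price: int):
        self.vin, self.owner, self.price = vin, owner, price
        self.escrow = {}  # buyer -> funds deposited

    def deposit(self, buyer: str, amount: int) -> None:
        self.escrow[buyer] = self.escrow.get(buyer, 0) + amount

    def transfer(self, buyer: str) -> bool:
        """Title moves if and only if escrowed funds cover the price."""
        if self.escrow.get(buyer, 0) >= self.price:
            self.owner = buyer
            self.escrow[buyer] -= self.price
            return True
        return False

contract = TitleContract(vin="1HGCM82633A004352", owner="alice", price=25_000)
contract.deposit("bob", 25_000)
print(contract.transfer("bob"), contract.owner)  # True bob
```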

Ballinger pointed to the infamous Nigerian Prince email scams as an example of this need for trust. More and more connected devices (and vehicles) means more and more opportunities for malicious parties to try and infiltrate or hack these machines. There's a chance a real Nigerian Prince could be trying to offer you large sums of money, he said, but you still need to be able to recognize a fraud.

“Blockchain can bring a lot to the table in terms of all the connected devices,” Ballinger said. “In a machine-to-machine economy, it's not going to take us long to get to a trillion connected devices. How will those devices know the Nigerian Prince from a scammer?”
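The machine-to-machine trust problem Ballinger describes boils down to message authentication: a device accepts a command only if it carries a valid cryptographic tag, so a scammer's message fails verification no matter how convincing it reads. Here is a minimal sketch using Python's standard-library HMAC; the shared key and messages are made up for illustration (a blockchain-based scheme would typically use public-key signatures and an on-chain identity registry rather than a pre-shared secret).

```python
import hashlib
import hmac

def sign(key: bytes, message: bytes) -> str:
    """Compute an HMAC-SHA256 tag for the message."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, tag: str) -> bool:
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(sign(key, message), tag)

device_key = b"provisioned-at-manufacture"  # hypothetical shared secret
msg = b"unlock doors"
tag = sign(device_key, msg)

print(verify(device_key, msg, tag))                     # True: trusted sender
print(verify(device_key, b"wire me $10,000,000", tag))  # False: the 'prince'
```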

Where Are the Standards?

One issue the panel did agree on was the need for standards around Blockchain. “The challenge of Blockchain is that everyone in the value chain has to participate,” Rick Gruehagen, CTO of Spireon (a provider of connected vehicle and fleet tracking technologies), told the audience. “Everyone has to embrace it if it's going to work.”

Both Gruehagen and Ballinger discussed the potential of Blockchain in supply chain and fleet management for this very reason. In the fleet industry, Gruehagen said, everything revolves around the chain of custody for shipped goods and all of the inherent transfers that happen, making it an ideal use case for Blockchain.
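The chain-of-custody idea is essentially a tamper-evident ledger: each custody event is hashed together with the hash of the previous event, so altering any past record breaks the chain. A minimal sketch (illustrative Python, with made-up event fields; a real deployment would distribute this ledger across the participants):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 over the block's canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain: list, event: dict) -> None:
    """Link a new custody event to the hash of the previous block."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"event": event, "prev": prev}
    block["hash"] = block_hash({"event": event, "prev": prev})
    chain.append(block)

def valid(chain: list) -> bool:
    """Recompute every link; any edit to history shows up as a break."""
    for i, b in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        if b["prev"] != prev:
            return False
        if b["hash"] != block_hash({"event": b["event"], "prev": b["prev"]}):
            return False
    return True

custody = []
append(custody, {"shipment": "pallet-42", "holder": "factory"})
append(custody, {"shipment": "pallet-42", "holder": "carrier"})
print(valid(custody))  # True

custody[0]["event"]["holder"] = "tampered"
print(valid(custody))  # False: the alteration is detectable
```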

“One reason supply chain is such a good application [for Blockchain] is because of how you can get everyone involved,” Ballinger added, noting that a big part of MOBI's overall mission is developing application layer standards for OEMs to share.

Tesloop's Sonnad later took things a step further, encouraging companies to consider open source as a means of easing the path of Blockchain adoption for developers. “I think open standards and source code is the right place to start,” he said. “[Imagine] if you go forward 10 years and Blockchain is as easy to develop on as, say, .Net or Linux.” For Sonnad, creating a Blockchain that is standardized and also cheap, easy, and fast to develop on is key.

Where Do We Start?

Where, then, are the best applications for Blockchain right now? All of the panelists encouraged developers to look at simpler use cases that can lead into broader applications first. Sonnad joked that his company calls its application of Blockchain the “Mockchain” because of this. “It's the smallest use possible of the Blockchain," he said. “It's a little bit valuable, but it's very cheap and it sets the stage for broader deployment.”

“I wouldn't tell anyone to sit on the sidelines, but I would also recommend that people take a bit of a back step and start with a case that will give you immediate business value,” IBM's Majed said. She encouraged OEMs to look at three key factors: technology, scalability, and performance. “Technology is evolving,” she said. “Start small and with something that will give you a quick return on value and build your own expertise and capabilities and expand on it...For OEMs, finance is a good area to start with...Test it out to see how it works.”

She also discussed work that IBM has been doing in the mobility space. One project uses Blockchain to transform a connected vehicle into a sort of car wallet, in the same way Apple Pay and Android Pay transform smartphones for the same purpose. The idea, she explained, is to enable mobile payments for transactions like toll roads and to also facilitate cars being used more as a platform.

“How can we enable that kind of mobility in car sharing, where the car becomes a platform where other providers can provide a service?” she asked. Blockchain could also be used to provide secure and trusted over the air software updates to vehicles. But Majed cautioned that for all of this to happen, “the scalability and performance needs to catch up to have the low latency and speed needed for connected vehicles.”

All panelists agreed that a trust-less, decentralized economy is coming. But ultimately, the panelists said, it will be up to OEMs to decide to what degree this will take effect. It will all depend on companies' need for trust and compliance, their business network, and, of course, the value proposition. “If you want to create something out there and no one owns it or controls it, Blockchain is the best way,” Sonnad said.

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, and robotics.

Thu, 26 May 2022
Killexams : A School Just for You

Every day in America, as many as 8,300 high school students drop out of school.

After that, things typically go even further downhill. A high-school dropout is out of the running for 90 percent of U.S. jobs and costs taxpayers nearly $300,000 over a lifetime, according to a study by Northeastern University.

Without that diploma, dropouts are more than twice as likely as college graduates to live in poverty – and 63 times more likely to go to jail.

What if technology could improve these grim statistics? What if the right data could help teachers intervene to prevent kids from dropping out?

The clock is ticking for America’s workforce: By 2020, nearly 6 million high school dropouts will go without work, predicts management consulting firm McKinsey and Company.  On the other side of the coin, there will be a shortage of 1.5 million college graduates to fill highly skilled jobs.

Businesses are so aware of America’s growing skills gap that they factor in the quality of local schools when deciding where to set up shop. They need assurance that the educational system will produce future employees with the skills to keep pace with rapid technological innovation.

If schools don’t produce hirable graduates and business districts don’t grow, then jobs, taxes, and other investments in local communities are jeopardized. “Education is a key pillar of any economic development strategy,” said Riz Khaliq, marketing and communications director for IBM Global Public Sector and Smarter Cities.

A One-Size-Fits-All Tradition

The good news is, as technology changes the skills required for future jobs, it’s also changing the way students are educated.

Traditionally, education has been a one-size-fits-all model with students sharing the same curriculum, classes, teachers, and books. While many teachers differentiate learning for students, large class sizes and a lack of resources often limit how much they can do.

In the age of the Internet, cloud computing, and mobile devices, however, more data exists to help teachers and administrators address individual student needs than ever before.

Schools are collecting volumes of student data from many sources, including grades, test scores, digital learning profiles, behavior reports, attendance records, and demographic data.

“Our schools are swimming in data,” said Bruce Gardner, North America Education Director for IBM. “These days we just need to help schools organize it and direct it in a way for it to improve instruction, and ultimately improve outcomes.”

Harnessed correctly, insights from that data will lead to the dawn of a new era of education: a personalized learning experience for students.

“It’s about truly understanding individual student requirements, truly understanding the resources that are available to address those requirements, and then using data and analytics to align those two things,” Gardner said.

IBM’s analytics can also integrate that data to provide a more holistic picture of how each student learns. When combined with a database of curriculum materials, best teacher practices, and outside education resources, the technology can also predict which students are at risk — and recommend solutions, said Gardner.
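
As a concrete (and deliberately simplified) illustration of the at-risk idea, a flag can be computed from a handful of indicators. The features, weights, and threshold below are invented for the sketch; a real system of the kind described here would learn them from historical district data rather than hard-code them.

```python
def risk_score(student):
    """Toy at-risk score; the indicator weights are invented for
    illustration, not drawn from any real district's model."""
    score = 0.0
    score += 0.5 * max(0.0, (0.95 - student["attendance_rate"]) * 10)  # absences
    score += 0.3 * max(0.0, (70 - student["avg_grade"]) / 10)          # low grades
    score += 0.2 * student["behavior_incidents"]                        # behavior reports
    return score

def flag_at_risk(students, threshold=1.0):
    """Names of students whose score crosses the (invented) threshold."""
    return [s["name"] for s in students if risk_score(s) >= threshold]

students = [
    {"name": "A", "attendance_rate": 0.98, "avg_grade": 88, "behavior_incidents": 0},
    {"name": "B", "attendance_rate": 0.80, "avg_grade": 61, "behavior_incidents": 3},
]
```

The point of the sketch is the shape of the pipeline: many per-student signals reduced to one actionable flag a teacher can follow up on.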

IBM included education on its 2013 list of five innovations that would change people’s lives over five years. In a future where machines can learn, reason, and engage with people, classrooms will actually learn about students, the company predicts.

By increasing student engagement and enhancing teacher effectiveness, personalized learning has the potential to improve academic performance and reduce those troubling dropout rates.

That doesn’t mean that technology will replace teachers or the human insights that are so critical to understanding students’ needs. “What technology can do is make the process easier for the teacher so that the time constraints and the data constraints are not inhibitors for learning,” Gardner explained.

To understand the potential impact of technology to transform education, one only has to look at the improvements that big data and analytics have created in the field of healthcare. Even the best emergency room doctors have limited solutions without a complete patient history. When comprehensive patient histories were provided to doctors at the point of care, however, patient outcomes improved dramatically.

“We’re now applying that same philosophy to education,” said Khaliq, where comprehensive student profiles could improve learning outcomes in the same way that patient data improved health outcomes.

Georgia’s Gwinnett County Public Schools (GCPS) is partnering with IBM to put that theory to the test, and provide personalized learning to its 170,000 K-12 students.

The district’s eCLASS project uses analytics to enable teachers to identify both at-risk students and high performers, said Steven Flynt, chief strategy and performance officer for GCPS. In the process, GCPS hopes to not only improve student outcomes, but also to attract more business investment to the county.

With IBM’s latest analytics, GCPS is also starting to use prescriptive data that can not only indicate how students are doing and predict what they might achieve, but also recommend solutions based on what worked in other cases, Flynt added.

“Being able to quickly recall what has helped in the past can provide teachers with valuable tools to solve future problems,” Flynt said.

But personalized learning isn’t just for K-12 students. Higher education can benefit from technology that enables students to leverage online learning to get the courses they need to graduate on time.

IBM’s analytics can also help align students with their prospective career pathways. Australia’s Deakin University, for example, is using IBM’s Watson technology to create a Student Advisor application to give students real-time answers to school-related questions.

Eventually, students will be able to get personalized responses, as well as recommendations on career paths, job prospects, and alternative routes through their degree programs, wrote Simon Eassom, Global Manager of Education Solutions for IBM Smarter Cities, in a recent blog post.

Leveling the Educational Playing Field

Perhaps most important, the cloud-based nature of IBM’s technologies gives all schools the potential for personalized learning, and can help level the playing field for education in every county and state, said Khaliq.

“The data that is available today is an important natural resource for the next century,” he said. “And education systems that leverage that data are going to be more competitive in the global economy.”

Thu, 16 Jul 2015
Killexams : How to measure computer performance

With a car, it's easy enough to know that going 90 mph is faster than 85.

But judging computer performance is a lot more complicated. From MIPS to SPEC to Viewperfs, the industry has developed a plethora of benchmarks aimed at answering what seems like a simple-enough question: Which computer systems offer the best performance?

"It is a major ordeal to try to evaluate hardware platforms," says John Kundrat, manager of business partner relations at Structural Dynamic Research Corp. "If it's so difficult for us who do this day in and day out, imagine how hard it is for users."

Why is it so hard to pin down computer speed? Different tasks put different strains on a system, so being fastest at displaying and rotating graphics doesn't necessarily mean a machine is equally adept at finite-element-analysis number crunching. That's one reason why experts caution that a single benchmark result is not enough to rate a computer's performance, and users should look at a variety of test results before drawing conclusions about how different machines stack up.

Raw speed. One popular benchmark suite, from the Standard Performance Evaluation Corp. (SPEC), measures the performance of a computer's central processing unit (CPU). But even this--which doesn't take into account graphics display, or how fast data can be pulled off a hard disk--is fairly complicated.

One set of SPEC tests, SPECint95, looks at the CPU's integer performance, or how it handles simple operations. Another group of benchmarks, SPECfp95, examines floating-point performance, or how fast the chip does more complex math.

Results generally show up in news reports, if at all, as two single numbers. But to get truly useful information from SPEC, it's important to look at the individual tests comprising both integer and floating point, says SPEC President Kaivalya Dixit.

"People shouldn't be comparing one number," he advises. "SPECfp can vary 4 or 5 to 1." For example, a given workstation might be five times as fast as another computer on the "tomcatv" fluid-dynamics test, but only twice as fast on a different analysis test, he says. And, such huge variations are typical.

He suggests engineers look at the numbers closest to their everyday tasks. For example, most would care less about SPEC's weather-prediction component and more about the 101.tomcatv mesh-generation program (one of 10 component pieces of the SPECfp95 number). Other tests in the SPECfp95 suite include 104.hydro2d, with hydrodynamical Navier-Stokes equations; and 110.applu, parabolic/elliptic partial differential equations.
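
The way those component numbers roll up into one figure is itself simple arithmetic: the published SPEC composite is the geometric mean of the per-test speed ratios against a reference machine. The sketch below uses invented ratios to show how two machines with very different per-test behavior (the 4-or-5-to-1 spreads Dixit describes) can still land near each other on the single composite number.

```python
def spec_composite(ratios):
    """Geometric mean of per-test speed ratios (reference time / measured
    time), which is how SPEC combines component results into one number."""
    product = 1.0
    for r in ratios:
        product *= r
    return product ** (1.0 / len(ratios))

# Invented per-test ratios: machine A is 5x machine B on tomcatv but slower
# on the other components, yet the composites end up fairly close.
machine_a = [10.0, 3.0, 3.0]   # tomcatv, hydro2d, applu
machine_b = [2.0, 5.0, 5.0]
```

This is exactly why the article advises reading the individual component results: the geometric mean deliberately damps outliers, so a huge advantage on the one test you care about can vanish in the headline figure.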

SPEC recently updated its test suite from 1992 to '95 versions, in order to keep pace with rapidly advancing technology. "SPEC92 is dead," Dixit notes. "If you use it, you will get wrong information." One reason: computer systems became so much more powerful in the past three years, the old benchmarks could sit comfortably inside a system's cache (on-chip memory), thus not accurately putting a processor through its paces.

The '95 SPECs use real applications, and not exercises dreamed up in a lab, Walter Bays, senior staff engineer at Sun, notes. "It's a big improvement."

But others in the industry say the numbers are of limited use. "SPECmarks and PLBs (Picture-Level Benchmarks) don't do a very good job conveying how the system will work in real-world applications," maintains Ty Rabe, manager of CAD application marketing at Digital Equipment Corp. "Graphics benchmarks are more useful, but few people understand their mix of graphics."

Graphics performance. The Graphics Performance Committee (GPC), recently merged with SPEC, aims to measure how fast computer systems can run graphics-intensive applications.

“GPC probably has the best standard benchmarks within the industry,” says Bjorn Andersson, senior product manager, Ultra1 desktop, at Sun Microsystems Computer Corp. “They give you quite a good indication within different data sets. ... But you have to be careful which numbers you're looking at.”

GPC numbers have been published in an inch-thick volume that can be difficult for anyone outside the computer industry to plow through. "When are they going to get a comprehensible number?" one industry analyst asked. "The reports are impossible to understand."

"It is a little daunting," admits Mike Bailey, chair of the GPC Group. "We prefer to think of it as complete."

Such an array of test results is reported so that users can look at performance on the specific tasks they're likely to do, he says. "Vectors per second or polygons per second are not terribly meaningful," he notes, because vectors can be many different sizes; arbitrarily generating a number using 10-pixel lines is unlikely to duplicate anyone's real-world experience.

"Users can have a lot of faith in the numbers because vendors are being policed by all the other vendors," Bailey says. "It makes for some long meetings." GPC, a non-profit organization, consists of manufacturers, users, consultants, and researchers.

PLBs (Picture-Level Benchmarks) are test models that are rotated, panned, zoomed, and so on, for a would-be purchaser to time how long such tasks take on different platforms. Two catch-all numbers, PLBwire93 and PLBsurf, give combined results for tests in wireframe and surface models, respectively. However, as with SPECmarks, users can pick the specific test models most likely to reflect their actual work. For engineers, that could include sys_chassis and race_car in wireframe, and cyl_head in surface.

Each hardware vendor can write the software used to perform rotating, panning, and other tasks on each model--leading critics to complain that the benchmarking codes are more finely tuned than an off-the-shelf software package is ever likely to be. "We think PLB is a dubious benchmark," says John Spitzer, manager of desktop performance engineering at Silicon Graphics.

However, this can give a look at a system's potential, if software programmers take advantage of a computer's specific features.

Viewperfs are the first Open-GL performance benchmark endorsed by the GPC's OpenGL Performance Characterization (OPC) subcommittee. Developed by IBM, they test performance (using frames per second) on specific data sets running in genuine software packages. So far, there are standard "viewsets" for Parametric Technology's Pro/CDRS industrial-design software, IBM's Data Explorer visualization, and Intergraph's DesignReview 3-D model review package. The committee is now looking to get other software vendors to contribute test suites.
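
Frames per second is the most intuitive of these metrics: render as many frames as possible inside a fixed wall-clock window and divide. The harness below shows that pattern in miniature; the render_frame stand-in is invented, since a real viewset replays recorded graphics calls from the application under test rather than an arbitrary workload.

```python
import time

def measure_fps(render_frame, seconds=1.0):
    """Call render_frame as often as possible for a fixed wall-clock window
    and report frames per second, the metric Viewperf-style tests use."""
    frames = 0
    start = time.perf_counter()
    while time.perf_counter() - start < seconds:
        render_frame()
        frames += 1
    return frames / (time.perf_counter() - start)

# Invented stand-in for a frame render; a real viewset would replay the
# application's recorded graphics traffic instead.
fps = measure_fps(lambda: sum(range(10_000)), seconds=0.1)
```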

Yet another benchmark, STREAM, was developed at the University of Delaware to measure sustained memory bandwidth. It consists of four tests with long vector operations, designed to stress memory access--key point of system performance for applications that place such demands on a computer.
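
A rough idea of what STREAM measures can be sketched in a few lines: time one of its long-vector kernels (here the "triad", a[i] = b[i] + s*c[i]) and divide bytes moved by elapsed time. This only illustrates the accounting; the figures a Python version produces mostly reflect interpreter overhead, and the real benchmark is a compiled C/Fortran program.

```python
import time
from array import array

def stream_triad(n=2_000_000, scalar=3.0):
    """Time a STREAM-style 'triad' kernel, a[i] = b[i] + scalar * c[i],
    and estimate bandwidth as bytes moved per second."""
    b = array("d", [1.0] * n)
    c = array("d", [2.0] * n)
    start = time.perf_counter()
    a = array("d", (b[i] + scalar * c[i] for i in range(n)))
    elapsed = time.perf_counter() - start
    bytes_moved = 3 * 8 * n  # read b, read c, write a; 8-byte doubles
    return a, bytes_moved / elapsed / 1e6  # MB/s
```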

Standards alternatives. Most hardware and software companies have their own benchmarks as well. To many vendors, this is the only way they can fully test the capabilities of their own systems. However, it is difficult for buyers to know which such numbers are trustworthy.

"I've seen so many benchmarks," says Kundrat at SDRC. "They can be structured to do a lot of things."

"They are tremendously easy to abuse," notes Rabe at DEC. However, such internal benchmarks can be useful for company engineers, who may spot specific performance problems and redesign future systems accordingly. "They allowed us to make substantial improvement on Alpha systems," he says. Or, proprietary tests can help determine if a system has been optimized for important markets.

Many major companies develop their own benchmarks, based specifically on the work engineers plan to do with new computer systems--something many in the industry highly recommend as the best way to test how well a computer will perform the tasks it would be assigned. Ford, for example, reportedly has a suite of 15 different applications running on multiple platforms. And at Eastman Kodak, staff engineer Rex Hays used some internally generated code from actual work in progress to see how much faster Sun's new UltraSPARC ran vs. the older SPARCstation 20. Results varied from a two- to five-fold increase, depending on the task, he says.

Time spent running such tests, if they are to be useful, is considerable; one test alone ran for more than 70 hours. "It's a tedious process," according to Kundrat at SDRC. And, during the weeks or months of evaluating systems, new technology can come out making the older systems obsolete.

Ultimately, most in the industry agree, benchmarks can be useful as a guide to expected performance but not an exact prognosticator. "It's kind of like the EPA mileage estimate," Rabe at Digital concludes. "Your mileage may vary."

Glossary of Terms:

Sorting out the Benchmarks

  • GPC--Graphics Performance Characterization Committee. This non-profit organization of vendors, users, analysts, and researchers includes several subgroups: XPC, measuring XWindow performance (such as 2-D lines and 2-D polygons); OPC, the Open GL Performance Characterization group, testing implementations of Open-GL graphics routines; and Picture-Level Benchmarking, where vendors can devise their own ways of describing various standard graphics scenes and then measure performance on their implementations.

  • MFLOPS--million floating-point operations per second. A less popular benchmark than in the past, as more sophisticated tests measuring real-world applications come into favor.

  • MIPS--million instructions per second. Raw measurement of how many simple instructions a computer chip can process. Often criticized for providing little useful real-world data.

  • PLB--Picture Level Benchmark, from the Graphics Performance Characterization (GPC) Committee. Features a number of different models, and then two categories of results for surface (PLBsurf93) and wireframe (PLBwire93). Critics say that because each vendor gets to write its own code for PLB tests, the numbers tend to be much more finely tuned than real-world software is to take advantage of specific chip capabilities.

  • SPEC--Standard Performance Evaluation Corp., a non-profit group of major hardware vendors who jointly develop benchmark tests. SPEC tests measure central-processing-unit speeds and not graphics. SPEC recently came out with new benchmarks, SPEC95, to replace the '92 test suite. SPECint95, which measures a processor's integer performance, can be useful for looking at how a system might handle 2-D, wireframe CAD. SPECfp95 is recommended for more demanding computing; it measures a processor's floating-point speed. However, for tasks such as 3-D solid modeling and visualization, users should run graphics benchmarks as well, since CPU performance alone will not accurately reflect performance. SPEC results are available on the World Wide Web at

  • Viewperf--'Stand-alone' benchmark model from the Graphics Performance Committee. A user can feed the model in, and it spins, thus measuring system performance. There are seven tests for the Pro/CDRS industrial design (Parametric Technology Corp.) "viewset," 10 for Data Explorer visualization (IBM), and 10 for Design Review (Intergraph). Others are under development.

Proprietary tests can also yield useful results

Along with industry-standard benchmarks, companies develop their own test suites in order to measure computer performance. Hewlett-Packard and Structural Dynamics Research Corp. agreed to share the results of one such proprietary test suite with Design News.

This Hedgetrimmer benchmark is a suite of 13 tests using I-DEAS Master Series Release 3 design, drafting, and simulation modules. It runs a 20-MByte model file of an electric hedgetrimmer through various simulations to mimic common computing situations. Such tests are aimed at helping both workstation designers and potential buyers see how much additional performance they might expect to receive, as they move up the product line.

The detailed steps:

1. Build the hedgetrimmer assembly from various parts (blade, housing, switch, etc.)
2. Save the resulting assembly to a new model file
3. Explode assembly into individual parts
4. Reassemble assembly from the parts
5. Shade the hedgetrimmer using hardware shading
6. Display hidden-line (wireframe) view of the hedgetrimmer
7. Display hedgetrimmer using ray tracing (CPU intensive)
8. Move to Drafting Module and shade top and front views
9. Display hidden-line view in Drafting mode
10. Set up assembly for Drafting mode
11. Enter Simulation Module and mesh the hedgetrimmer blade
12. Restrain the blade and perform analysis (complex FEA)
13. Display analysis results graphically
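
Whatever the application, a suite like this boils down to timing a list of named steps and summing them. The harness below captures that pattern; the step workloads are invented stand-ins, since the real I-DEAS operations obviously can't be reproduced here.

```python
import time

def run_suite(steps):
    """Time each (name, function) step; return per-step seconds plus a total."""
    results = {}
    for name, step in steps:
        start = time.perf_counter()
        step()
        results[name] = time.perf_counter() - start
    results["total"] = sum(results.values())
    return results

# Invented stand-in workloads; a real suite would drive the application itself.
steps = [
    ("build_assembly", lambda: sum(i * i for i in range(200_000))),
    ("shade_model", lambda: sorted(range(100_000), reverse=True)),
]
timings = run_suite(steps)
```

Running the same step list on several machines then gives directly comparable per-step numbers, which is closer to what a buyer needs than any single composite figure.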

Tips from the experts

Using the right numbers is only part of the story when measuring computer performance, according to industry experts. Here are some other tips:

  • Don't use a single benchmark to try to rate computer performance.

  • Once you've narrowed down the choices, test your own specific applications on several different platforms. If you don't have the resources to develop testing software in-house, you can check with your software vendor or an outside consultant.

  • Factors other than speed and system performance are also important in making a purchase decision: available application software, vendor reliability, upgradability, and support services.

Wed, 06 Jul 2022
Killexams : The Best Data Analytics Certifications For Your Next Career Move

Pricing from: $12.42 per month

DataCamp is a one-stop-shop for data analytics professionals to get the right skills and certifications for their field. It has more than 380 courses designed to meet the needs of a data scientist, data engineer, statistician, programmer and data analyst―to name a few. If you already have a head start on your career and are looking for specific skills, DataCamp offers courses tailored to what you are looking for, including structured query language (SQL) fundamentals, applied finance, machine learning (ML) and data visualization.

Test the platform out with its limited free access. This gives you the first chapter of every course for free. You can also choose to take up to six free courses and gain access to the job board with a free professional profile on the site. Most people choose the Premium plan for $12.42 per month, which gives you full access to the entire library of certifications.

To get certified, you must pass a skills test. The test is timed and will go over general skills. You may have a coding challenge as part of the test or be required to make a case study submission.

After certification, you can create a professional profile and gain career help from the career services team. This team guides professionals through a job search and provides interview prep for upcoming meetings.

Thu, 23 Jun 2022 Kimberlee Leonard
Killexams : Emulating The IBM PC On An ESP32

The IBM PC spawned the basic architecture that grew into the dominant Wintel platform we know today. Once heavy, cumbersome and power thirsty, it’s a machine that you can now emulate on a single board with a cheap commodity microcontroller. That’s thanks to work from [Fabrizio Di Vittorio], who has shared a how-to on Youtube. 

The full playlist is quite something to watch, showing off a huge number of old-school PC applications and games running on the platform. There’s QBASIC, FreeDOS, Windows 3.0, and yes, of course, Flight Simulator. The latter game was actually considered somewhat of a de facto standard for PC compatibility in the 1980s, so the fact that the ESP32 can run it with [Fabrizio’s] code suggests he’s done well.

It’s amazingly complete, with the ESP32 handling everything from video and sound output to keyboard and mouse input. It’s a testament to the capability of modern microcontrollers that this is such a simple feat in 2021.

We’ve seen the ESP32 emulate 8-bit gaming systems before, too. If you remember [Fabrizio’s] name, it’s probably from his excellent FabGL library. Videos after the break.

Fri, 15 Jul 2022 Lewin Day
Killexams : Microsoft issues $25 price hike for certification exams

If you're planning to take any Microsoft certification exams, now is the time to act because Microsoft will raise the price for each test by $25 beginning July 1.

Prices vary by country, but Microsoft's price lookup tool reveals a current test price in the United States of $125, and a price after July 1 of $150. There are lower prices for current students at high schools and colleges: $60 now and $83 after July 1.


The new price affects exams for nine types of certifications: Microsoft Certified Technology Specialist; Certified IT Professional; Certified Professional Developer; Certified Desktop Support Technician; Certified Systems Administrator; Certified Systems Engineer; Certified Application Developer; Certified Solution Developer; and Certified Database Administrator.

If you're planning to get certified in multiple Microsoft technologies, the price could add up quickly. For example, there are dozens of certifications that fall under the category of Microsoft Certified Technology Specialists (MCTS), and in some cases multiple certifications for the same piece of software. There are three MCTS certs for Windows Server 2008, and another for Windows Server virtualization.
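
The arithmetic behind "adds up quickly" is worth spelling out. The two US price points below come straight from the article; the exam count is an invented example.

```python
OLD_PRICE = 125  # US exam price before July 1, per the article
NEW_PRICE = 150  # US exam price on or after July 1

def hike_percent(old, new):
    """Relative increase, as a percentage."""
    return (new - old) / old * 100

def extra_cost(num_exams):
    """Additional outlay for a candidate sitting num_exams exams after the hike."""
    return num_exams * (NEW_PRICE - OLD_PRICE)

# Four exams toward an MCTS track would cost $100 more after July 1,
# a 20 percent increase per exam.
four_exam_extra = extra_cost(4)
```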

But not all certification exams will get more expensive. Microsoft said it "does not anticipate" raising the price for the Microsoft Certified Master, Certified Architect, Technology Associate, or Office Specialist exams.

Copyright © 2011 IDG Communications, Inc.

Thu, 30 Jun 2022 Jon Brodkin