C9010-262 practice exam questions are updated daily at killexams.com

Rather than wasting effort on a C9010-262 e-book full of obsolete questions, register at killexams.com and stop worrying about outdated C9010-262 material. We handle that for you. Our team works continuously to provide valid, up-to-date C9010-262 exam prep derived from the C9010-262 practice test.

Exam Code: C9010-262 Practice test 2022 by Killexams.com team
C9010-262 IBM Power Systems with POWER8 Enterprise Technical Sales Skills V2

Exam Title : IBM Certified Technical Sales Specialist - Power Systems with POWER8 Enterprise V2
Exam ID : C9010-262
Exam Duration : 120 mins
Questions in test : 60
Passing Score : 38 / 60
Exam Center : Pearson VUE
Real Questions : IBM Power Systems with POWER8 Enterprise Technical Sales Skills Real Questions
VCE Practice Test : IBM C9010-262 Certification VCE Practice Test

- Identify the advantages of POWER8 processor technology vs x86.
- Compare and contrast the value proposition of Power Systems solutions (e.g., SAP HANA) with competitive solutions (e.g., x86, Dell, HP, Oracle, etc.).
- Position the advantages of PowerVM virtualization solutions relative to competition (e.g., Oracle, HP, Hyper-V, VMware). 10%
Design Solution to Customer Requirements
- Given customer requirements, design an appropriate HMC solution, including Enterprise Pools, remote restart, performance and capacity management, virtualization management/co-management, redundant FSPs, multiple HMCs, and secure networking.
- Incorporate appropriate existing hardware in a new configuration (e.g., disk drive migration, etc.).
- Discuss the options that are available for attaching SAS storage to Power Systems enterprise servers.
- Given a customer's requirements, describe the benefits of Integrated Facility for Linux.
- Recognize that enterprise servers can be designated for CBU.
- Describe the role of the Technical and Delivery Assessment (TDA) and how it is used to meet customer requirements.
- Identify terms and conditions when implementing CoD (Capacity on Demand) offerings (e.g., CUoD, Utility, Elastic, Mobile Activations, Trial).
- Identify key IBM cloud management offerings available for Power Systems (e.g. PowerVM NovaLink, PowerVC, IBM Cloud Orchestrator) and the benefits that are associated with having a cloud infrastructure based on Power enterprise servers (e.g. configuration flexibility, Dynamic Resource Optimization, Capacity on Demand).
- Utilize knowledge of key IBM cloud management offerings to address Power Systems customers' business imperatives for private and/or hybrid cloud.
- Position key IBM analytics solutions on enterprise Power Systems (e.g., DB2 BLU, Cognos, SPSS) relative to scalability over scale-out models.
- Identify IBM's heterogeneous computing solution offerings (e.g., solutions utilizing CAPI, GPUs, FPGAs).
- Identify technical advantages that POWER8 enterprise servers provide for Linux workloads in terms of performance, price/performance, and workload consolidation.
- Identify the processes available to install and maintain hardware and software (e.g., firmware, FLRT, etc.).
- Identify the benefits of PowerCare for enterprise servers.
- Given a client's availability requirements (RTO/RPO), design an appropriate solution for business continuity and/or disaster recovery (e.g., PowerHA, backup/restore). 35%
Power Systems Architecture and Product Family
- Position why enterprise models may be preferred to Scale-out models in order to satisfy customer needs.
- Describe the Power Systems enterprise servers' features and functionality.
- Recognize POWER8 compatibility with prior generations of Power Systems when planning to upgrade or migrate a customer's installed hardware and/or software.
- Describe reliability, availability and serviceability (RAS) features of the Power Systems product family, especially those features that are exclusive to enterprise-class models.
- Identify Capacity on Demand features and benefits and describe when each is appropriate, including CUoD, Elastic, Utility and Trial.
- Describe Enterprise Pools prerequisites, capabilities and benefits.
- Design solutions with expansion drawers, adapter cards, traditional disk, Solid State Drives (SSDs), and attached SAN and tape into a POWER8 solution.
- Describe the benefits of POWER8 processor architecture, including SMT, L4 cache, balanced performance, clock speed, EnergyScale, memory bandwidth, I/O bandwidth (PCIe Gen3), CAPI, etc. relative to prior Power Systems and competition.
- Identify rack best practices for enterprise servers (e.g., I/O drawer placement, spacing, and cabling implications, "deracking", horizontal PDUs, IBM manufacturing options, etc.). 37%
Virtualization and Cloud
- Recognize when to configure physical I/O, virtual I/O or a combination of both.
- Match hardware connectivity (Fibre Channel, SAS, iSCSI) with the types of virtualization (NPIV, vSCSI, SR-IOV, SEA).
- Determine when shared storage pools are appropriate for an enterprise scenario.
- Given a scenario, apply the capabilities of Live Partition Mobility.
- Given a scenario, apply the resource sharing capabilities of Power Systems, including processors, memory, and I/O.
- Given a business need and workload, design an appropriate virtualization solution (including how to virtualize).
- Design an appropriate virtualization system management solution (consider: HMC, PowerVC, PowerVM NovaLink). 18%

IBM Power Systems with POWER8 Enterprise Technical Sales Skills V2
IBM Enterprise Practice Test
Killexams : IBM Enterprise VCE test - BingNews https://killexams.com/pass4sure/exam-detail/C9010-262

Killexams : Cybersecurity - what’s the real cost? Ask IBM

Cybersecurity has always been a concern for every type of organization. Even in normal times, a major breach is more than just the data economy’s equivalent of a ram-raid on Fort Knox; it has knock-on effects on trust, reputation, confidence, and the viability of some technologies. This is what IBM calls the “haunting effect”.

A successful attack breeds more, of course, both on the same organization again, and on others in similar businesses, or in those that use the same compromised systems. The unspoken effect of this is rising costs for everyone, as all enterprises are forced to spend money and time on checking if they have been affected too.

But in our new world of COVID-19, disrupted economies, climate change, remote working, soaring inflation, and looming recession, all such effects are all amplified. Throw in a war that’s hammering on Europe’s door (with political echoes across the Middle East and Asia) and it’s a wonder any of us can get out of bed in the morning.

So, what are the real costs of a successful cyberattack – not just hacks, viruses, and Trojans, but also phishing, ransomware, and concerted campaigns against supply chains and code repositories?

According to IBM’s latest annual survey, breach costs have risen by an unlucky 13% over the past two years, as attackers, which include hostile states, have probed the systemic and operational weaknesses exposed by the pandemic.

The global average cost of a data breach has reached an all-time high of $4.35 million – at least, among the 550 organizations surveyed by the Ponemon Institute for IBM Security (over a year from March 2021). Indeed, IBM goes so far as to claim that breaches may be contributing to the rising costs of goods and services. The survey states:

Sixty percent of studied organizations raised their product or services prices due to the breach, when the cost of goods is already soaring worldwide amid inflation and supply chain issues.

Incidents are also “haunting” organizations, says the company, with 83% having experienced more than one data breach, and with 50% of costs occurring more than a year after the successful attack.

Cloud maturity is a key factor, adds the report:

Forty-three percent of studied organizations are in the early stages [of cloud adoption] or have not started applying security practices across their cloud environments, observing over $660,000 in higher breach costs, on average, than studied organizations with mature security across their cloud environments.

Forty-five percent of respondents run a hybrid cloud infrastructure. This leads to lower average breach costs than among those operating a public- or private-cloud model: $3.8 million versus $5.02 million (public) and $4.24 million (private).

That said, those are still significant costs, and may suggest that complexity is what deters attackers, rather than having a single target to hit. Nonetheless, hybrid cloud adopters are able to identify and contain data breaches 15 days faster on average, says the report.

However, with 277 days being the average time lag – an extraordinary figure – the real lesson may be that today’s enterprise systems are adept at hiding security breaches, which may appear as normal network traffic. Forty-five percent of breaches occurred in the cloud, says the report, so it is clearly imperative to get on top of security in that domain.

IBM then makes the following bold claim:

Participating organizations fully deploying security AI and automation incurred $3.05 million less on average in breach costs compared to studied organizations that have not deployed the technology – the biggest cost saver observed in the study.

Whether this finding will stand for long as attackers explore new ways to breach automated and/or AI-based systems – and perhaps automate attacks of their own invisibly – remains to be seen. Compromised digital employee, anyone?

Global systems at risk

But perhaps the most telling finding is that cybersecurity has a political dimension – beyond the obvious one of Russian, Chinese, North Korean, or Iranian state incursions, of course.

Concerns over critical infrastructure and global supply chains are rising, with threat actors seeking to disrupt global systems that include financial services, industrial, transportation, and healthcare companies, among others.

A year ago in the US, the Biden administration issued an Executive Order on cybersecurity that focused on the urgent need for zero-trust systems. Despite this, only 21% of critical infrastructure organizations have so far adopted a zero-trust security model, according to the report. It states:

Almost 80% of the critical infrastructure organizations studied don’t adopt zero-trust strategies, seeing average breach costs rise to $5.4 million – a $1.17 million increase compared to those that do. All while 28% of breaches among these organizations were ransomware or destructive attacks.

Add to that, 17% of breaches at critical infrastructure organizations were caused by a business partner being initially compromised, highlighting the security risks that over-trusting environments pose.

That aside, one of the big stories over the past couple of years has been the rise of ransomware: malicious code that locks up data, enterprise systems, or individual computers, forcing users to pay a ransom to (they hope) retrieve their systems or data.

But according to IBM, there are no obvious winners or losers in this insidious practice. The report adds:

Businesses that paid threat actors’ ransom demands saw $610,000 less in average breach costs compared to those that chose not to pay – not including the ransom amount paid.

However, when accounting for the average ransom payment – which according to Sophos reached $812,000 in 2021 – businesses that opt to pay the ransom could net higher total costs, all while inadvertently funding future ransomware attacks.

The persistence of ransomware is fuelled by what IBM calls the “industrialization of cybercrime”.

The risk profile is also changing. Ransomware attack times show a massive drop of 94% over the past three years, from over two months to just under four days. Good news? Not at all, says the report, as the attacks may be higher impact, with more immediate consequences (such as destroyed data, or private data being made public on hacker forums).

My take

The key lesson in cybersecurity today is that all of us are both upstream and downstream from partners, suppliers, and customers in today’s extended enterprises. We are also at the mercy of reused but compromised code from trusted repositories, and even sometimes from hardware that has been compromised at source.

So, what is the answer? Businesses should ensure that their incident responses are tested rigorously and frequently in advance – along with using red-, blue-, or purple-team approaches (thinking like a hacker, a defender, or both).

Regrettably, IBM says that 37% of organizations that have IR plans in place fail to test them regularly. To paraphrase Spinal Tap, you can’t code for stupid.

Published Wed, 27 Jul 2022 12:00:00 -0500. Source: https://diginomica.com/cybersecurity-whats-real-cost-ask-ibm
Killexams : Security and Vulnerability Management Market 2022: Size and Value estimated to Reach CAGR of 10% | Latest Trend, Competitors Analysis 2027

The Global Security and Vulnerability Management Market forecast report provides strategically important competitor information, analysis, and insights to formulate effective R&D strategies. The report also reviews key companies involved in Security and Vulnerability Management and enlists all their major and minor projects.

The “Security and Vulnerability Management Market” is expected to grow considerably in the forecast period 2022-2027. The research report sheds light on leading players with facts and figures, definitions, SWOT analysis, expert opinions, and the growth of the Security and Vulnerability Management industry in the coming years. Leading companies in the Security and Vulnerability Management market are: IBM Corporation, Qualys Inc., Hewlett Packard Enterprise Company, Dell EMC, … The report provides key statistics on the market status, trends, share, and opportunities of the leading Security and Vulnerability Management players.

The security and vulnerability management market is expected to register a CAGR of 10% over the forecast period (2021-2027). As the current cybersecurity threat landscape is continually evolving, organizations need to be proactive in their threat and vulnerability management efforts. The efficiency of vulnerability management depends on the organization’s ability to keep up with current security threats and trends.

Get a sample copy of the report at-https://www.marketreportsworld.com/enquiry/request-sample/13517535

Company Coverage: –

– IBM Corporation

– Qualys Inc.

– Hewlett Packard Enterprise Company

– Dell EMC

– Tripwire Inc.

– Symantec Corporation

– McAfee Inc.

– Micro Focus International PLC

– Rapid7 Inc.

– Fujitsu Limited

– Alien Vault Inc.

– Skybox Security Inc.

Get a sample Copy of the Security and Vulnerability Management Market Report 2022

Market Players Competitor Analysis:

– Recent security attacks have increased the need for a robust cybersecurity management system, one that is centered around a strong policy and applies many technologies to achieve defense in depth.

– Coupled with the rapid growth in the number of cyber attacks, the demand for strict compliance and security packages to protect confidential data across different verticals, such as government, banking, retail, and manufacturing, is increasing and is expected to drive the growth of the market over the forecast period.

– Data breaches result in increased costs for preventive measures to manage the impact of theft and loss of valuable customer information. Owing to such factors and increasing awareness among users, demand in this market has grown.

Scope of the Report

Security and vulnerability management is the practice of identifying, classifying, and mitigating vulnerabilities in networking software or hardware. It has become an integral part of an enterprise’s security in recent years. Vulnerability management utilizes technology that seeks out security flaws and tests systems for weak points, allowing clients to identify and quantify where the network is at risk and to prevent unnecessary weak points. These factors are expected to increase the demand for these solutions.

To Understand How COVID-19 Impact is Covered in This Report. Get sample copy of the report at –https://www.marketreportsworld.com/enquiry/request-covid19/13517535

Key Market Trends

BFSI Segment is Expected to Hold the Major Market Share

– The BFSI sector is faced with a number of data breaches and cyber attacks, owing to the large customer base that the industry serves. Data breaches result in increased corrective-measure costs and the loss of valuable customer information. For instance, in the recent past, Taiwan’s Far Eastern International Bank incurred a loss of around 60 million through malware.

– With the aim to secure the IT processes and systems, secure customer critical data and comply with government regulations, both private and public banking institutes are focused on implementing the latest technology to prevent cyber attacks.

– The growing technological penetration coupled with digital channels, such as internet banking, mobile banking, becoming the preferred choice of customers for banking services, there is a greater need for banks to leverage advanced authentication and access control processes.

Asia-Pacific is Expected to Grow at the Fastest Rate

– A huge population base and accessibility to the internet have helped Asia-Pacific emerge as a market prone to cyber threats. Most countries in the region do not have very evolved cybersecurity regulations. For instance, Indonesia is a world leader in VPN usage, in terms of the present population with its application across the region. Further, the growing penetration of the internet has made the region highly vulnerable to cyber attacks.

– However, there remains a huge gap in cybercrime legislation compared to that in North America and Europe, where the lack of awareness and knowledge of basic security make most of these online transactions highly susceptible to digital theft.

– With large MNCs rushing to invest, the region is now witnessing increased spending on cybersecurity solutions, particularly among SMBs and large organizations. Businesses that have other MNCs as part of their value chain have also assisted adoption in the region, which is expected to continue to drive demand.

Competitive Landscape

The security and vulnerability management market is moderately competitive and consists of several major players. Those players currently dominate the market; however, with innovative and sustainable offerings, many companies are increasing their market presence, thereby expanding their business footprint into new markets.

– June 2018 – Symantec Corp. announced new innovations and enhancements to its Network Security for the Cloud Generation solution, designed to protect enterprise devices, anywhere their employees work or travel, across the network, the cloud, mobile and traditional endpoints.

– November 2017 – IBM announced the successful testing of a fully integrated Wavelength Division Multiplexing (WDM) Si photonics chip for Big Data and cloud services, enabling the download of an entire HD digital movie in two seconds.

Enquire before Purchasing this report at-https://www.marketreportsworld.com/enquiry/pre-order-enquiry/13517535

Regional Analysis: –

– North America

– Asia-Pacific

– Europe

– South America

– Africa

Some Major Points from TOC: –

1.1 Study Deliverables
1.2 Study Assumptions
1.3 Scope of the Study



4.1 Market Overview
4.2 Introduction to Market Drivers and Restraints
4.3 Market Drivers
4.3.1 Increasing Number of Cyber Attacks
4.3.2 Growing Adoption of Cloud Computing by Enterprises
4.4 Market Restraints
4.4.1 Lack of Awareness Toward SVM Solutions
4.4.2 Scalability and Deployment Costs
4.5 Industry Value Chain Analysis
4.6 Industry Attractiveness – Porter’s Five Force Analysis
4.6.1 Threat of New Entrants
4.6.2 Bargaining Power of Buyers/Consumers
4.6.3 Bargaining Power of Suppliers
4.6.4 Threat of Substitute Products
4.6.5 Intensity of Competitive Rivalry

5.1 By Size of the Organization
5.1.1 Small and Medium Enterprises
5.1.2 Large Enterprises
5.2 By End-user Vertical
5.2.1 Aerospace, Defense, and Intelligence
5.2.2 BFSI
5.2.3 Healthcare
5.2.4 Manufacturing
5.2.5 Retail
5.2.6 IT and Telecommunication
5.2.7 Other End-user Verticals
5.3 Geography
5.3.1 North America
5.3.2 Europe
5.3.3 Asia-Pacific
5.3.4 Latin America
5.3.5 Middle East and Africa

6.1 Company Profiles
6.1.1 IBM Corporation
6.1.2 Qualys Inc.
6.1.3 Hewlett Packard Enterprise Company
6.1.4 Dell EMC
6.1.5 Tripwire Inc.
6.1.6 Symantec Corporation
6.1.7 McAfee Inc.
6.1.8 Micro Focus International PLC
6.1.9 Rapid7 Inc.
6.1.10 Fujitsu Limited
6.1.11 Alien Vault Inc.
6.1.12 Skybox Security Inc.



Browse complete table of contents at –https://www.marketreportsworld.com/TOC/13517535

About Us: The market is changing rapidly with the ongoing expansion of the industry. Advancement in technology has provided today’s businesses with multifaceted advantages, resulting in daily economic shifts. Thus, it is very important for a company to comprehend the patterns of market movements in order to strategize better. An efficient strategy offers companies a head start in planning and an edge over competitors. Market Reports World is a credible source for market reports that will provide you with the lead your business needs.

Contact Us:


Email:[email protected]

Phone:US +(1) 424 253 0946 /UK +(44) 203 239 8187

Press Release Distributed by The Express Wire

To view the original version on The Express Wire visit Security and Vulnerability Management Market 2022: Size and Value estimated to Reach CAGR of 10% | Latest Trend, Competitors Analysis 2027

Published Thu, 04 Aug 2022 01:06:00 -0500. Source: https://www.digitaljournal.com/pr/security-and-vulnerability-management-market-2022-size-and-value-estimated-to-reach-cagr-of-10-latest-trend-competitors-analysis-2027
Killexams : Enterprise innovation: Low Code/No Code democratizes IT

Low-code/no-code (LCNC) platforms are being used by businesses today to generate value and stimulate innovation across many industries. Enterprises can deliver new capabilities quickly and easily on demand without needing to depend on their IT teams. These software development environments make it possible for people with little or no professional coding knowledge to design and change programs. Sixty percent of low-code/no-code users expect the platforms to be used more frequently.

Businesses are increasingly depending on cutting-edge solutions like low-code/no-code (LCNC) platforms because they want to build apps quickly as they embark on their digital transformation journeys. These platforms, which demand a minimal level of technical expertise, are rapidly gaining popularity among businesses in a variety of industries that want to easily and quickly build their own apps. “This trend has also given birth to ‘citizen developers’ which has been instrumental for many organizations to bridge their IT skills gap.”, observes P Saravanan, Vice-President, Cloud Engineering, Oracle India.

Factors driving the adoption of LCNCs

“Rapid Automation and shortage of talented/skilled developers are the key factors driving LCNC. The recent pandemic has also pushed all companies toward digital transformation with greater speed”, says Mitesh Shah, Vice President, SAP BTP Application Core Products & Services.

The growing need for businesses to respond with agility and speed to changing market dynamics has led to an increased adoption of LCNC approach. Project timelines come down from months to days leading to faster product rollouts. “LCNC approach involves smaller teams, fewer resources, lower infrastructure or low maintenance costs, and better ROI with faster agile releases making it more cost-effective than from-scratch development”, Vishal Chahal, Director IBM Automation, IBM India Software Labs adds.

The current macroeconomic climate has tightened financial constraints for enterprises everywhere. Companies are therefore seeking application development methods that are affordable, which LCNC provides.

The post-pandemic scenario and the requirement for organisations to develop resilience have sped up the adoption of technology; this has led to what we also refer to as compressed transformation—the simultaneous transformation of several organisational components.

Then, there is the demand for agility and experimentation skills as firms engage in rapid transformation and create cutting-edge apps to support their company and workforce development agenda. LCNC brings never-before-seen agility to the development of contemporary multi-channel experiences. “It also helps organizations address the talent gap as skilled technology talent is becoming harder to find, and LCNC developers can help organizations tap into diversified talent that brings business expertise”, Raghavan Iyer, Senior Managing Director, Innovation Lead - Integrated Global Services, Accenture Technology opines.

Accelerating enterprise innovation

LCNC platforms are designed to harness the power of the cloud and data in order to let business users create applications that provide unique innovations to transform operations and experiences and deliver operational efficiencies and insights. The inclusion of industry accelerators and interfaces with the digital core in LCNC platforms creates a myriad of opportunities for applying data to innovative and disruptive applications. One of LCNC's main advantages is that it recruits those who are most ideally situated to effect change. “Citizen developers can closely collaborate with professional developers and IT experts to create enterprise-class applications to experiment and develop applications”, Iyer adds.

According to a Gartner estimate, 70 percent of new apps would be developed by market participants using low-code and no-code platforms by 2025. Programming expertise may not be as crucial in the future as LCNC technologies automate the process of creating new apps. “This will eventually free up developers to focus on the development for niche areas”, Shah explains. Nowadays, rather than being predominantly driven by technology professionals, enterprise innovation focuses on boosting customer experiences, increasing efficiency, and improving business processes. Adoption of the LCNC platform and technologies enables participation in the innovation process from a variety of workforce segments, particularly those with domain expertise.

Bridging the IT skills gap

With the help of LCNC, businesses can stop relying on IT teams to implement and develop new solutions, and business users are given the tools they need to become change agents. Professional developers can concentrate on more intricate, inventive, and feature-rich innovations by using low code approaches that automate the fundamental routines. No Code enables business users (or citizen developers) to investigate and test out novel solutions despite having little to no coding experience.

Enterprises now want every bit of talent and expertise they can acquire to meet the demands of the rapidly changing business environment. The LCNC approach's citizen developers assist firms in addressing the talent shortage, employee attrition, and skill gaps.

Capabilities of organizations

IBM has built LCNC capabilities in its platforms for an end to end coverage from development and deployment to the management of solutions. “IBM Automation platforms provide AI-driven capability to manage and automate both IT systems and business processes through the LCNC approach. Using technology like Turbonomics and Instana along with Watson AIOps, users are able to automate the observability, optimization, and remediation of their hybrid cloud solutions with low to no coding requirements, monitor their IT systems while getting AI-driven actions for reducing cost and performing dynamic optimization to upscale or downscale their systems with no coding and minimal IT support”, remarked Vishal.

Oracle’s primary offering, Oracle APEX, a low-code platform, has been adopted for enterprise apps across the world. Saravanan adds, “APEX enables users to build enterprise apps 20x faster and with 100x less code. Businesses are also becoming aware of the value of LCNC in India.”

At Accenture, there are large communities of practitioners on LCNC cutting across hyperscalers, core platforms and pureplay development platforms.“We have built a global practice of LCNC that creates thousands of applications for ourselves and our clients.”, says Iyer.

SAP Labs India is developing the core services behind the LCNC products of SAP. “LCNC core services provide the unification across the various LCNC offerings of SAP. Additionally, in the area of Process Automation, Labs India teams are playing a significant role”, Shah states.

With the increasing move to the LCNC approach, technology is now more readily available to all employees inside the company, improving communication between IT and business divisions and allowing for the development of solutions that are better suited to corporate requirements. Adoption of such platforms can also aid in bridging the skill shortage in the IT sector, as it enables businesses to tap into talent pools outside of their usual boundaries.

Published Tue, 19 Jul 2022 21:07:00 -0500. Source: https://cio.economictimes.indiatimes.com/news/next-gen-technologies/enterprise-innovation-low-code/no-code-democratizes-it/92992994
Killexams : Java Development Definitions
  • A

    abstract class

    In Java and other object oriented programming (OOP) languages, objects and classes (categories of objects) may be abstracted, which means that they are summarized into characteristics that are relevant to the current program’s operation.
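    For illustration, here is a minimal Java sketch (the `Shape` and `Circle` classes and their methods are invented for this example): the abstract class captures only the characteristics relevant to the program, and concrete subclasses fill in the details.

```java
// Shape abstracts away everything except what the program needs: an area.
abstract class Shape {
    abstract double area();          // each subclass must supply this

    String describe() {              // shared concrete behavior
        return "area=" + area();
    }
}

class Circle extends Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    @Override double area() { return Math.PI * radius * radius; }
}

public class AbstractDemo {
    public static void main(String[] args) {
        Shape s = new Circle(1.0);   // a Shape itself cannot be instantiated
        System.out.println(s.describe());
    }
}
```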

  • AJAX (Asynchronous JavaScript and XML)

    AJAX (Asynchronous JavaScript and XML) is a technique aimed at creating better and faster interactive web apps by combining several programming tools including JavaScript, dynamic HTML (DHTML) and Extensible Markup Language (XML).

  • Apache Camel

    Apache Camel is a Java-based framework that implements messaging patterns in Enterprise Integration Patterns (EIP) to provide a rule-based routing and mediation engine enterprise application integration (EAI).

  • AWS SDK for Java

    The AWS SDK for Java is a collection of tools for developers creating Java-based Web apps to run on Amazon cloud components such as Amazon Simple Storage Service (S3), Amazon Elastic Compute Cloud (EC2) and Amazon SimpleDB.

  • AWS SDK for JavaScript

    The AWS SDK for JavaScript is a collection of software tools for the creation of applications and libraries that use Amazon Web Services (AWS) resources.

  • B

    bitwise operator

    Because they allow greater precision and require fewer resources, bitwise operators, which manipulate individual bits, can make some code faster and more efficient. Applications of bitwise operations include encryption, compression, graphics, communications over ports/sockets, embedded systems programming and finite state machines.
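    A short Java sketch of the common bitwise operators (the values are chosen arbitrarily for illustration):

```java
public class BitwiseDemo {
    public static void main(String[] args) {
        int flags = 0b0110;                  // 6 in binary
        System.out.println(flags & 0b0100);  // AND: test whether a bit is set -> 4
        System.out.println(flags | 0b0001);  // OR: set a bit                  -> 7
        System.out.println(flags ^ 0b0110);  // XOR: toggle bits               -> 0
        System.out.println(flags << 1);      // left shift: multiply by 2      -> 12
        System.out.println(flags >> 1);      // right shift: divide by 2       -> 3
    }
}
```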

  • C


    compositing

    Compositing is used to create layered images and video in advertisements, memes and other content for print publications, websites and apps. Compositing techniques are also used in video game development, augmented reality and virtual reality.

  • const

    Const (constant) in programming is a keyword that defines a variable or pointer as unchangeable.
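    Note that Java reserves the keyword `const` but does not use it; the `final` modifier plays the equivalent role. A minimal sketch (names and values are illustrative):

```java
public class ConstDemo {
    static final double TAX_RATE = 0.07;   // final: cannot be reassigned

    public static void main(String[] args) {
        final int base = 100;
        // base = 200;                     // would not compile: base is final
        System.out.println(base + base * TAX_RATE);  // prints 107.0
    }
}
```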

  • CSS (cascading style sheets)

    This definition explains the meaning of cascading style sheets (CSS) and how using them with HTML pages is a user interface (UI) development best practice that complies with the separation of concerns design pattern.

  • E

    embedded Tomcat

    An embedded Tomcat server consists of a single Java web application along with a full Tomcat server distribution, packaged together and compressed into a single JAR, WAR or ZIP file.

  • EmbeddedJava

    EmbeddedJava is Sun Microsystems' software development platform for dedicated-purpose devices with embedded systems, such as products designed for the automotive, telecommunication, and industrial device markets.

  • encapsulation in Java

    Java offers four different "scope" realms--public, protected, private, and package--that can be used to selectively hide data constructs. To achieve encapsulation, the programmer declares the class variables as “private” and then provides what are called public “setter and getter” methods which make it possible to view and modify the variables.
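A minimal sketch of this pattern, using an illustrative `Account` class whose private field is reachable only through its public getter and setter-style method:

```java
class Account {
    private double balance; // hidden: not directly reachable from outside the class

    public double getBalance() { return balance; }   // getter

    public void deposit(double amount) {             // setter-style method
        if (amount > 0) balance += amount;           // can enforce invariants
    }
}

public class EncapsulationDemo {
    public static void main(String[] args) {
        Account a = new Account();
        a.deposit(50.0);
        a.deposit(-10.0); // silently rejected by the validation above
        System.out.println(a.getBalance()); // 50.0
    }
}
```

Because callers can never touch `balance` directly, the class alone decides which modifications are legal.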

  • Enterprise JavaBeans (EJB)

    Enterprise JavaBeans (EJB) is an architecture for setting up program components, written in the Java programming language, that run in the server parts of a computer network that uses the client/server model.

  • exception handler

    In Java, checked exceptions are found when the code is compiled; for the most part, the program should be able to recover from these. Exception handlers are coded to define what the program should do under specified conditions.
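For example, `FileReader`'s constructor declares a checked `IOException` subclass, so the compiler forces the programmer to handle or declare it (the file name here is illustrative):

```java
import java.io.FileReader;
import java.io.IOException;

public class HandlerDemo {
    public static void main(String[] args) {
        try {
            new FileReader("no-such-file.txt"); // constructor declares a checked exception
        } catch (IOException e) {
            // the handler defines what the program does: here, recover and report
            System.out.println("Recovered from: " + e.getClass().getSimpleName());
        }
    }
}
```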

  • F

    full-stack developer

    A full-stack developer is a type of programmer who has a functional knowledge of all techniques, languages and systems engineering concepts required in software development.

  • G

    git stash

    Git stash is a built-in command in the Git distributed version control tool that locally stores the most recent changes in a workspace and resets the state of the workspace to the prior commit state.

  • GraalVM

    GraalVM is a runtime from Oracle for writing and executing Java code, with support for other JVM languages and for ahead-of-time compilation to native executables.

  • Groovy

    Groovy is a dynamic object-oriented programming language for the Java virtual machine (JVM) that can be used anywhere Java is used.

  • GWT (GWT Web Toolkit)

    The GWT software development kit facilitates the creation of complex browser-based Java applications that can be deployed as JavaScript, for portability across browsers, devices and platforms.

  • H

    Hibernate

    Hibernate is an open source object relational mapping (ORM) tool that provides a framework to map object-oriented domain models to relational databases for web applications.

  • HTML (Hypertext Markup Language)

    HTML (Hypertext Markup Language) is a text-based approach to describing how content contained within an HTML file is structured.

  • I

    InstallAnywhere

    InstallAnywhere is a program that can be used by software developers to package a product written in Java so that it can be installed on any major operating system.

  • IntelliJ IDEA

    The free and open source IntelliJ IDEA includes JUnit and TestNG, code inspections, code completion, support for multiple refactorings, Maven and Ant build tools, a visual GUI (graphical user interface) builder and a code editor for XML as well as Java. The commercial version, Ultimate Edition, provides more features.

  • inversion of control (IoC)

    Inversion of control, also known as the Hollywood Principle, changes the control flow of an application and allows developers to sidestep some typical configuration hassles.
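A hand-rolled sketch of the idea, with illustrative class names: the `Notifier` receives its dependency instead of constructing it, so the surrounding framework or caller controls the wiring:

```java
interface MessageSender {
    void send(String msg);
}

class EmailSender implements MessageSender {
    public void send(String msg) { System.out.println("EMAIL: " + msg); }
}

class Notifier {
    private final MessageSender sender;

    // The dependency is injected; Notifier never calls `new EmailSender()` itself,
    // so it can be wired to any MessageSender (a mock, an SMS sender, ...).
    Notifier(MessageSender sender) { this.sender = sender; }

    void alert(String msg) { sender.send(msg); }
}

public class IocDemo {
    public static void main(String[] args) {
        new Notifier(new EmailSender()).alert("build finished");
    }
}
```

Frameworks like Spring automate exactly this wiring step, which is where the configuration savings come from.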

  • J

    J2ME (Java 2 Platform, Micro Edition)

    J2ME (Java 2 Platform, Micro Edition) is a technology that allows programmers to use the Java programming language and related tools to develop programs for mobile wireless information devices such as cellular phones and personal digital assistants (PDAs).

  • JAR file (Java Archive)

    A Java Archive, or JAR file, contains all of the various components that make up a self-contained, executable Java application, deployable Java applet or, most commonly, a Java library to which any Java Runtime Environment can link.

  • Java

    Java is a widely used programming language expressly designed for use in the distributed environment of the internet.

  • Java abstract class

    In Java and other object oriented programming (OOP) languages, objects and classes may be abstracted, which means that they are summarized into characteristics that are relevant to the current program’s operation.
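A minimal sketch with illustrative classes: the abstract `Shape` fixes the relevant characteristic (`area`) while leaving each concrete subclass to say how it is computed:

```java
abstract class Shape {
    abstract double area();                        // each subclass must define this

    String describe() { return "area=" + area(); } // shared, concrete behavior
}

class Circle extends Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    double area() { return Math.PI * radius * radius; }
}

public class AbstractDemo {
    public static void main(String[] args) {
        Shape s = new Circle(1.0); // `new Shape()` itself would not compile
        System.out.println(s.describe());
    }
}
```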

  • Java annotations

    Within the Java development kit (JDK), there are simple annotations used to make comments on code, as well as meta-annotations that can be used to create annotations within annotation-type declarations.

  • Java assert

    The Java assert is a mechanism used primarily in nonproduction environments to test for extraordinary conditions that will never be encountered unless a bug exists somewhere in the code.
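A small sketch (the method and message are illustrative); the check runs only when the JVM is started with assertions enabled (`java -ea`):

```java
public class AssertDemo {
    static int abs(int x) {
        int result = x < 0 ? -x : x;
        // verified only when run with `java -ea AssertDemo`; stripped out otherwise
        assert result >= 0 : "abs must never return a negative value";
        return result;
    }

    public static void main(String[] args) {
        System.out.println(abs(-5)); // 5
    }
}
```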

  • Java Authentication and Authorization Service (JAAS)

    The Java Authentication and Authorization Service (JAAS) is a set of application program interfaces (APIs) that can determine the identity of a user or computer attempting to run Java code, and ensure that the entity has the privilege or permission to execute the functions requested.

  • Java BufferedReader

    Java BufferedReader is a public Java class that reads large volumes of data from a disk or network source into much faster RAM, improving performance over issuing a separate disk read or network request for each read command.
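A self-contained sketch; a `StringReader` stands in for a slower file or network source so the example runs anywhere:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class ReaderDemo {
    public static void main(String[] args) throws IOException {
        String data = "alpha\nbeta\ngamma";
        try (BufferedReader in = new BufferedReader(new StringReader(data))) {
            String line;
            while ((line = in.readLine()) != null) { // buffered, line-at-a-time reads
                System.out.println(line.toUpperCase());
            }
        }
    }
}
```

In real code the `StringReader` would typically be a `FileReader` or an `InputStreamReader` over a socket.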

  • Java Business Integration (JBI)

    Java Business Integration (JBI) is a specification that defines an approach to implementing a service-oriented architecture (SOA), the underlying structure supporting Web service communications on behalf of computing entities such as application programs or human users.

  • Java Card

    Java Card is an open standard from Sun Microsystems for a smart card development platform.

  • Java Champion

    The Java Champion designation is awarded to leaders and visionaries in the Java technology community.

  • Java chip

    The Java chip is a microchip that, when included in or added to a computer, will accelerate the performance of Java programs (including the applets that are sometimes included with Web pages).

  • Java Comparator

    Java Comparator can compare objects to return an integer based on a positive, equal or negative comparison. Since it is not limited to comparing numbers, Java Comparator can be set up to order lists alphabetically or numerically.
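For example, ordering strings by length rather than alphabetically (the list contents are illustrative):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class ComparatorDemo {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("pear", "fig", "banana");
        words.sort(Comparator.comparingInt(String::length)); // shortest first
        System.out.println(words); // [fig, pear, banana]
    }
}
```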

  • Java compiler

    Generally, Java compilers are run and pointed to a programmer’s code in a text file to produce a class file for use by the Java virtual machine (JVM) on different platforms. Jikes, for example, is an open source compiler that works in this way.

  • Java Cryptography Extension (JCE)

    The Java Cryptography Extension (JCE) is an application program interface (API) that provides a uniform framework for the implementation of security features in Java.

  • Java Data Objects (JDO)

    Java Data Objects (JDO) is an application program interface (API) that enables a Java programmer to access a database implicitly - that is, without having to make explicit Structured Query Language (SQL) statements.

  • Java Database Connectivity (JDBC)

    Java Database Connectivity (JDBC) is an API packaged with the Java SE edition that makes it possible to connect from a Java Runtime Environment (JRE) to external, relational database systems.

  • Java Development Kit (JDK)

    The Java Development Kit (JDK) provides the foundation upon which all applications that are targeted toward the Java platform are built.

  • Java Flight Recorder

    Java Flight Recorder is a Java Virtual Machine (JVM) profiler that gathers performance metrics without placing a significant load on resources.

  • Java Foundation Classes (JFC)

    Using the Java programming language, Java Foundation Classes (JFC) are pre-written code in the form of class libraries (coded routines) that give the programmer a comprehensive set of graphical user interface (GUI) routines to use.

  • Java IDE

    Java IDEs typically provide language-specific features in addition to the code editor, compiler and debugger generally found in all IDEs. Those elements may include Ant and Maven build tools and TestNG and JUnit testing.

  • Java keyword

    Java keywords are terms that have special meaning in Java programming and cannot be used as identifiers for variables, classes or other elements within a Java program.

  • Java Message Service (JMS)

    Java Message Service (JMS) is an application program interface (API) from Sun Microsystems that supports the formal communication known as messaging between computers in a network.

  • Java Mission Control

    Java Mission Control is a performance-analysis tool that renders sampled JVM metrics in easy-to-understand graphs, tables, histograms, lists and charts.

  • Java Platform, Enterprise Edition (Java EE)

    The Java Platform, Enterprise Edition (Java EE) is a collection of Java APIs owned by Oracle that software developers can use to write server-side applications. It was formerly known as Java 2 Platform, Enterprise Edition, or J2EE.

  • Java Runtime Environment (JRE)

    The Java Runtime Environment (JRE), also known as Java Runtime, is the part of the Java Development Kit (JDK) that contains and orchestrates the set of tools and minimum requirements for executing a Java application.

  • Java Server Page (JSP)

    Java Server Page (JSP) is a technology for controlling the content or appearance of Web pages through the use of servlets, small programs that are specified in the Web page and run on the Web server to modify the Web page before it is sent to the user who requested it.

  • Java string

    Strings, in Java, are immutable sequences of Unicode characters. Strings are objects in Java and the string class enables their creation and manipulation.
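Immutability means every "modifying" method returns a new `String` and leaves the original untouched:

```java
public class StringDemo {
    public static void main(String[] args) {
        String s = "java";
        String t = s.toUpperCase(); // returns a NEW String object
        System.out.println(s);      // java  (the original is unchanged)
        System.out.println(t);      // JAVA
        System.out.println(s == t); // false (two distinct objects)
    }
}
```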

  • Java virtual machine (JVM)

    A Java virtual machine (JVM), an implementation of the Java Virtual Machine Specification, interprets compiled Java binary code (called bytecode) for a computer's processor (or "hardware platform") so that it can perform a Java program's instructions.

  • JAVA_HOME

    JAVA_HOME is an operating system (OS) environment variable which can optionally be set after either the Java Development Kit (JDK) or the Java Runtime Environment (JRE) is installed.

  • JavaBeans

    JavaBeans is an object-oriented programming interface from Sun Microsystems that lets you build reusable applications or program building blocks called components that can be deployed in a network on any major operating system platform.

  • JavaFX

    JavaFX is a software development platform for the creation of both desktop apps and rich internet applications (RIAs) that can run on various devices. The name is a short way of typing "Java Effects."

  • JavaScript

    JavaScript is a programming language that started off simply as a mechanism to add logic and interactivity to an otherwise static Netscape browser.

  • JAX-WS (Java API for XML Web Services)

    Java API for XML Web Services (JAX-WS) is one of a set of Java technologies used to develop Web services.

  • JBoss

    JBoss is a division of Red Hat that provides support for the JBoss open source application server program and related middleware services marketed under the JBoss Enterprise Middleware brand.

  • JDBC Connector (Java Database Connectivity Connector)

    The JDBC (Java Database Connectivity) Connector is a program that enables various databases to be accessed by Java application servers that are run on the Java 2 Platform, Enterprise Edition (J2EE) from Sun Microsystems.

  • JDBC driver

    A JDBC driver (Java Database Connectivity driver) is a small piece of software that allows JDBC to connect to different databases. Once loaded, a JDBC driver connects to a database by providing a specifically formatted URL that includes the port number, the machine and database names.

  • JHTML (Java within Hypertext Markup Language)

    JHTML (Java within Hypertext Markup Language) is a standard for including a Java program as part of a Web page (a page written using the Hypertext Markup Language, or HTML).

  • Jikes

    Jikes is an open source Java compiler from IBM that adheres strictly to the Java specification and promises an "extremely fast" compilation.

  • JMX (Java Management Extensions)

    JMX (Java Management Extensions) is a set of specifications for application and network management in the J2EE development and application environment.

  • JNDI (Java Naming and Directory Interface)

    JNDI (Java Naming and Directory Interface) enables Java platform-based applications to access multiple naming and directory services.

  • JOLAP (Java Online Analytical Processing)

    JOLAP (Java Online Analytical Processing) is a Java application-programming interface (API) for the Java 2 Platform, Enterprise Edition (J2EE) environment that supports the creation, storage, access, and management of data in an online analytical processing (OLAP) application.

  • jQuery

    jQuery is an open-source JavaScript library that simplifies creation and navigation of web applications.

  • JRun

    JRun is an application server from Macromedia that is based on Sun Microsystems' Java 2 Platform, Enterprise Edition (J2EE).

  • JSON (Javascript Object Notation)

    JSON (JavaScript Object Notation) is a text-based, human-readable data interchange format used for representing simple data structures and objects in Web browser-based code. JSON is also sometimes used in desktop and server-side programming environments.

  • JTAPI (Java Telephony Application Programming Interface)

    JTAPI (Java Telephony Application Programming Interface) is a Java-based application programming interface (API) for computer telephony applications.

  • just-in-time compiler (JIT)

    A just-in-time (JIT) compiler is a program that turns bytecode into instructions that can be sent directly to a computer's processor (CPU).

  • Jython

    Jython is an open source implementation of the Python programming language, integrated with the Java platform.

  • K

    Kebab case

    Kebab case -- or kebab-case -- is a programming variable naming convention where a developer replaces the spaces between words with a dash.

  • M

    MBean (managed bean)

    In the Java programming language, an MBean (managed bean) is a Java object that represents a manageable resource, such as an application, a service, a component, or a device.

  • Morphis

    Morphis is a Java-based open source wireless transcoding platform from Kargo, Inc.

  • N

    NetBeans

    NetBeans is a Java-based integrated development environment (IDE). The term also refers to the IDE’s underlying application platform framework.

  • O

    object-relational mapping (ORM)

    Object-relational mapping (ORM) is a mechanism that makes it possible to address, access and manipulate objects without having to consider how those objects relate to their data sources.

  • Open Service Gateway Initiative (OSGi)

    OSGi (Open Service Gateway Initiative) is an industry plan for a standard way to connect devices such as home appliances and security systems to the Internet.

  • OpenJDK

    OpenJDK is a free, open-source version of the Java Development Kit for the Java Platform, Standard Edition (Java SE).

  • P

    Pascal case

    Pascal case is a naming convention in which developers start each new word in a variable with an uppercase letter.

  • prettyprint

    Prettyprint is the process of converting and presenting source code or other objects in a legible and attractive way.

  • R

    Remote Method Invocation (RMI)

    RMI (Remote Method Invocation) is a way that a programmer, using the Java programming language and development environment, can write object-oriented programming in which objects on different computers can interact in a distributed network.

  • S

    Snake case

    Snake case is a naming convention where a developer replaces spaces between words with an underscore.
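A hypothetical helper (a regex approach is one of several) that converts a camelCase identifier to snake_case; swapping the underscore for a dash would yield kebab case, and capitalizing each word with no separator yields Pascal case:

```java
public class CaseDemo {
    // hypothetical helper: insert an underscore at each lower-to-upper boundary
    static String toSnake(String camel) {
        return camel.replaceAll("([a-z0-9])([A-Z])", "$1_$2").toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(toSnake("userProfileId")); // user_profile_id
    }
}
```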

  • SQLJ

    SQLJ is a set of programming extensions that allow a programmer using the Java programming language to embed statements that provide SQL (Structured Query Language) database requests.

  • Sun Microsystems

    Sun Microsystems (often just called "Sun"), the leading company in computers used as Web servers, also makes servers designed for use as engineering workstations, data storage products, and related software.

  • T

    Tomcat

    Tomcat is an application server from the Apache Software Foundation that executes Java servlets and renders Web pages that include Java Server Page coding.

  • X

    XAML (Extensible Application Markup Language)

    XAML, Extensible Application Markup Language, is Microsoft's XML-based language for creating a rich GUI, or graphical user interface. XAML supports both vector and bitmap types of graphics, as well as rich text and multimedia files.

  • Wed, 13 Jul 2022 16:56:00 -0500 en text/html https://www.theserverside.com/definitions
    History of Artificial Intelligence

    Of the myriad technological advances of the 20th and 21st centuries, one of the most influential is undoubtedly artificial intelligence (AI). From search engine algorithms reinventing how we look for information to Amazon’s Alexa in the consumer sector, AI has become a major technology driving the entire tech industry forward into the future.

    Whether you’re a burgeoning start-up or an industry titan like Microsoft, there’s probably at least one part of your company working with AI or machine learning. According to a study from Grand View Research, the global AI industry was valued at $93.5 billion in 2021.

    AI as a force in the tech industry exploded in prominence in the 2000s and 2010s, but AI has been around in some form or fashion since at least 1950 and arguably stretches back even further than that.

    The broad strokes of AI’s history, such as the Turing Test and chess computers, are ingrained in the popular consciousness, but a rich, dense history lives beneath the surface of common knowledge. This article will distill that history and show you AI’s path from mythical idea to world-altering reality.

    Also see: Top AI Software 

    From Folklore to Fact

    While AI is often considered a cutting-edge concept, humans have been imagining artificial intelligences for millenniums, and those imaginings have had a tangible impact on the advancements made in the field today.

    Prominent mythological examples include the bronze automaton Talos of Greek myth, protector of the island of Crete, and the alchemical homunculi of the Renaissance period. Characters like Frankenstein’s Monster, HAL 9000 of 2001: A Space Odyssey, and Skynet from the Terminator franchise are just some of the ways we’ve depicted artificial intelligence in modern fiction.

    One of the fictional concepts with the most influence on the history of AI is Isaac Asimov’s Three Laws of Robotics. These laws are frequently referenced when real-world researchers and organizations create their own laws of robotics.

    In fact, when the U.K.’s Engineering and Physical Sciences Research Council (EPSRC) and Arts and Humanities Research Council (AHRC) published their 5 principles for designers, builders and users of robots, they explicitly cited Asimov as a reference point, though stating that Asimov’s Laws “simply don’t work in practice.”

    Microsoft CEO Satya Nadella also made mention of Asimov’s Laws when presenting his own laws for AI, calling them “a good, though ultimately inadequate, start.”

    Also see: The Future of Artificial Intelligence

    Computers, Games, and Alan Turing

    As Asimov was writing his Three Laws in the 1940s, researcher William Grey Walter was developing a rudimentary, analogue version of artificial intelligence. Called tortoises or turtles, these tiny robots could detect and react to light and contact with their plastic shells, and they operated without the use of computers.

    Later in the 1960s, Johns Hopkins University built their Beast, another computer-less automaton which could navigate the halls of the university via sonar and charge itself at special wall outlets when its battery ran low.

    However, artificial intelligence as we know it today would find its progress inextricably linked to that of computer science. Alan Turing’s 1950 paper Computing Machinery and Intelligence, which introduced the famous Turing Test, is still influential today. Many early AI programs were developed to play games, such as Christopher Strachey’s checkers-playing program written for the Ferranti Mark I computer.

    The term “artificial intelligence” itself wasn’t codified until 1956’s Dartmouth Workshop, organized by Marvin Minsky, John McCarthy, Claude Shannon, and Nathan Rochester, where McCarthy coined the name for the burgeoning field.

    The Workshop was also where Allen Newell and Herbert A. Simon debuted their Logic Theorist computer program, which was developed with the help of computer programmer Cliff Shaw. Designed to prove mathematical theorems the same way a human mathematician would, Logic Theorist would go on to prove 38 of the first 52 theorems found in the Principia Mathematica. Despite this achievement, the other researchers at the conference “didn’t pay much attention to it,” according to Simon.

    Games and mathematics were focal points of early AI because they were easy to apply the “reasoning as search” principle to. Reasoning as search, also called means-ends analysis (MEA), is a problem-solving method that follows three basic steps:

    • Determine the ongoing state of whatever problem you’re observing (you’re feeling hungry).
    • Identify the end goal (you no longer feel hungry).
    • Decide the actions you need to take to solve the problem (you make a sandwich and eat it).

    This early forerunner of AI’s rationale: If the actions did not solve the problem, find a new set of actions to take and repeat until you’ve solved the problem.
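The three steps can be sketched in Java as a loop over candidate actions (the hunger example and the action set are illustrative, not taken from any real MEA system):

```java
import java.util.List;
import java.util.function.UnaryOperator;

public class MeansEndsDemo {
    // Step 3's actions: each candidate transforms the state (a hunger level).
    static final List<UnaryOperator<Integer>> ACTIONS = List.of(
            h -> h - 1, // eat a snack: moves the state toward the goal
            h -> h      // do nothing: never closes the gap, so it is skipped
    );

    static int solve(int hunger) {
        while (hunger != 0) {            // step 1 vs. step 2: current state vs. goal
            for (UnaryOperator<Integer> act : ACTIONS) {
                int next = act.apply(hunger);
                if (next < hunger) {     // keep only actions that reduce the gap
                    hunger = next;
                    break;               // repeat from the new state
                }
            }
        }
        return hunger;
    }

    public static void main(String[] args) {
        System.out.println(solve(3)); // 0: goal state reached
    }
}
```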

    Neural Nets and Natural Languages

    With Cold-War-era governments willing to throw money at anything that might give them an advantage over the other side, AI research experienced a burst of funding from organizations like DARPA throughout the ’50s and ’60s.

    This research spawned a number of advances in machine learning. For example, Simon and Newell’s General Problem Solver, while using MEA, would generate heuristics, mental shortcuts which could block off possible problem-solving paths the AI might explore that weren’t likely to arrive at the desired outcome.

    Initially proposed in the 1940s, the first artificial neural network was invented in 1958, thanks to funding from the United States Office of Naval Research.

    A major focus of researchers in this period was trying to get AI to understand human language. Daniel Bobrow helped pioneer natural language processing with his STUDENT program, which was designed to solve word problems.

    In 1966, Joseph Weizenbaum introduced the first chatbot, ELIZA, an act which Internet users the world over are grateful for. Roger Schank’s conceptual dependency theory, which attempted to convert sentences into basic concepts represented as a set of simple keywords, was one of the most influential early developments in AI research.

    Also see: Data Analytics Trends 

    The First AI Winter

    In the 1970s, the pervasive optimism in AI research from the ’50s and ’60s began to fade. Funding dried up as sky-high promises were dragged to earth by a myriad of real-world issues facing AI research. Chief among them was a limitation in computational power.

    As Bruce G. Buchanan explained in an article for AI Magazine: “Early programs were necessarily limited in scope by the size and speed of memory and processors and by the relative clumsiness of the early operating systems and languages.” This period, as funding disappeared and optimism waned, became known as the AI Winter.

    The period was marked by setbacks and interdisciplinary disagreements amongst AI researchers. Marvin Minsky and Seymour Papert’s 1969 book Perceptrons discouraged the field of neural networks so thoroughly that very little research was done in the field until the 1980s.

    Then, there was the divide between the so-called “neats” and the “scruffys.” The neats favored the use of logic and symbolic reasoning to train and educate their AI. They wanted AI to solve logical problems like mathematical theorems.

    John McCarthy introduced the idea of using logic in AI with his 1959 Advice Taker proposal. In addition, the Prolog programming language, developed in 1972 by Alain Colmerauer and Philippe Roussel, was designed specifically as a logic programming language and still finds use in AI today.

    Meanwhile, the scruffys were attempting to get AI to solve problems that required AI to think like a person. In a 1975 paper, Marvin Minsky outlined a common approach used by scruffy researchers, called “frames.”

    Frames are a way that both humans and AI can make sense of the world. When you encounter a new person or event, you can draw on memories of similar people and events to give you a rough idea of how to proceed, such as when you order food at a new restaurant. You might not know the menu or the people serving you, but you have a general idea of how to place an order based on past experiences in other restaurants.

    From Academia to Industry

    The 1980s marked a return to enthusiasm for AI. R1, an expert system implemented by the Digital Equipment Corporation in 1982, was saving the company a reported $40 million a year by 1986. The success of R1 proved AI’s viability as a commercial tool and sparked interest from other major companies like DuPont.

    On top of that, Japan’s Fifth Generation project, an attempt to create intelligent computers running on Prolog the same way normal computers run on code, sparked further American corporate interest. Not wanting to be outdone, American companies poured funds into AI research.

    Taken altogether, this increase in interest and shift to industrial research resulted in the AI industry ballooning to $2 billion in value by 1988. Adjusting for inflation, that’s nearly $5 billion in 2022.

    Also see: Real Time Data Management Trends

    The Second AI Winter

    In the 1990s, however, interest began receding in much the same way it had in the ’70s. In 1987, Jack Schwartz, the then-new director of DARPA, effectively eradicated AI funding from the organization, yet already-earmarked funds didn’t dry up until 1993.

    The Fifth Generation Project had failed to meet many of its goals after 10 years of development, and as businesses found it cheaper and easier to purchase mass-produced, general-purpose chips and program AI applications into the software, the market for specialized AI hardware, such as LISP machines, collapsed and caused the overall market to shrink.

    Additionally, the expert systems that had proven AI’s viability at the beginning of the decade began showing a fatal flaw. As a system stayed in use, it continually accumulated more rules and needed an ever-larger knowledge base to handle them. Eventually, the amount of human staff needed to maintain and update the system’s knowledge base would grow until it became financially untenable to maintain. The combination of these factors and others resulted in the Second AI Winter.

    Also see: Top Digital Transformation Companies

    Into the New Millennium and the Modern World of AI

    The late 1990s and early 2000s showed signs of the coming AI springtime. Some of AI’s oldest goals were finally realized, such as Deep Blue’s 1997 victory over then-chess world champion Garry Kasparov in a landmark moment for AI.

    More sophisticated mathematical tools and collaboration with fields like electrical engineering resulted in AI’s transformation into a more logic-oriented scientific discipline, allowing the aforementioned neats to claim victory over their scruffy counterparts. Marvin Minsky, for his part, declared in 2003 that the field of AI was and had been “brain dead” for the past 30 years.

    Meanwhile, AI found use in a variety of new areas of industry: Google’s search engine algorithm, data mining, and speech recognition just to name a few. New supercomputers and programs would find themselves competing with and even winning against top-tier human opponents, such as IBM’s Watson winning Jeopardy! in 2011 over Ken Jennings, who’d once won 74 episodes of the game show in a row.

    One of the most impactful pieces of AI in recent years has been Facebook’s algorithms, which can determine what posts you see and when, in an attempt to curate an online experience for the platform’s users. Algorithms with similar functions can be found on websites like YouTube and Netflix, where they predict what content viewers want to watch next based on previous history.

    The benefits of these algorithms to anyone but these companies’ bottom lines are up for debate, as even former employees have testified before Congress about the dangers they can pose to users.

    Sometimes, these innovations weren’t even recognized as AI. As Nick Bostrom put it in a 2006 CNN interview: “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labelled AI anymore.”

    The trend of not calling useful artificial intelligence AI did not last into the 2010s. Now, start-ups and tech mainstays alike scramble to claim their latest product is fueled by AI or machine learning. In some cases, this desire has been so powerful that some will declare their product is AI-powered, even when the AI’s functionality is questionable.

    AI has found its way into many peoples’ homes, whether via the aforementioned social media algorithms or virtual assistants like Amazon’s Alexa. Through winters and burst bubbles, the field of artificial intelligence has persevered and become a hugely significant part of modern life, and is likely to grow exponentially in the years ahead.

    Mon, 25 Jul 2022 09:23:00 -0500 en-US text/html https://www.eweek.com/enterprise-apps/history-of-artificial-intelligence/
    Hyperledger Fabric

    What Is Hyperledger Fabric?

    Hyperledger Fabric is a modular blockchain framework that acts as a foundation for developing blockchain-based products, solutions, and applications using plug-and-play components that are aimed for use within private enterprises.

    Key Takeaways

    • Hyperledger is an enterprise-grade, open-source distributed ledger framework launched by the Linux Foundation in December 2016.
    • Fabric is a highly modular, distributed ledger technology (DLT) platform that was designed by IBM for industrial enterprise use.
    • Because Hyperledger Fabric is private and requires permission to access, businesses can segregate information (like prices), plus transactions can be sped up because the number of nodes on the network is reduced.
    • Fabric 2.0 was released in January 2020. The main features of this version are faster transactions, updated smart contract technology, and streamlined data sharing.

    Hyperledger Fabric was initiated by Digital Asset and IBM and has now emerged as a collaborative cross-industry venture, which is currently being hosted by the Linux Foundation. Among the several Hyperledger projects, Fabric was the first one to exit the “incubation” stage and achieve the “active” stage in March 2017.

    How Hyperledger Fabric Works

    Traditional blockchain networks can’t support private transactions and confidential contracts that are of utmost importance for businesses. Hyperledger Fabric was designed in response to this as a modular, scalable and secure foundation for offering industrial blockchain solutions.

    Hyperledger Fabric is an open-source engine for blockchain that takes care of the most important features for evaluating and using blockchain in business use cases.

    Within private industrial networks, the verifiable identity of a participant is a primary requirement. Many business sectors, such as healthcare and finance, are bound by data-protection regulations that mandate maintaining data about the various participants and their respective access to various data points. Hyperledger Fabric supports such permission-based membership: all network participants must have known identities.

    Modular Architecture

    The modular architecture of Hyperledger Fabric separates the transaction processing workflow into three different stages: smart contracts called chaincode that comprise the distributed logic processing and agreement of the system, transaction ordering, and transaction validation and commitment. This segregation offers multiple benefits:

    • Fewer required levels of trust and verification, which keeps the network and processing clutter-free
    • Improved network scalability
    • Better overall performance

    Additionally, Hyperledger Fabric’s support for plug-and-play of various components allows for easy reuse of existing features and ready-made integration of various modules. For instance, if a function already exists that verifies the participant’s identity, an enterprise-level network simply needs to plug and reuse this existing module instead of building the same function from scratch.

    The participants on the network have three distinct roles:

    • Endorser
    • Committer
    • Consenter

    In a nutshell, the transaction proposal is submitted to the endorser peer according to the predefined endorsement policy about the number of endorsers required. After sufficient endorsements by the endorser(s), a batch or block of transactions is delivered to the committer(s). Committers validate that the endorsement policy was followed and that there are no conflicting transactions. Once both the checks are made, the transactions are committed to the ledger.
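    The endorse–order–commit flow described above can be sketched in simplified form. This is a minimal simulation of the control flow only; the function names, policy value, and transaction format are illustrative and do not reflect Fabric's actual SDK API:

```python
# Simplified simulation of Fabric's endorse -> order -> validate/commit flow.
# Names and data formats are illustrative, not the real Fabric SDK API.

ENDORSEMENT_POLICY = 2  # number of endorser signatures required

def endorse(tx, endorsers):
    """Each endorser peer simulates the chaincode and signs the result."""
    return [f"sig:{peer}:{tx}" for peer in endorsers]

def order(endorsed_txs):
    """The ordering service batches endorsed transactions into a block."""
    return {"block": endorsed_txs}

def commit(block, ledger):
    """Committers check the endorsement policy before writing to the ledger."""
    for tx, sigs in block["block"]:
        if len(sigs) >= ENDORSEMENT_POLICY:
            ledger.append(tx)
    return ledger

ledger = []
tx = "transfer:A->B:10"
sigs = endorse(tx, ["peer0", "peer1"])   # meets the two-endorser policy
block = order([(tx, sigs)])
commit(block, ledger)
print(ledger)  # ['transfer:A->B:10']
```

A transaction endorsed by fewer peers than the policy requires would pass through ordering but be rejected at the commit step.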


    Since only confirming instructions—such as signatures and read/write sets—are sent across the network, the scalability and performance of the network are enhanced. Only endorsers and committers have access to the transaction, and security is improved because fewer participants have access to key data points.

    Example of Hyperledger Fabric

    Suppose there's a manufacturer that wants to ship chocolates to a specific retailer or market of retailers (e.g., all US retailers) at a specific price but does not want to reveal that price in other markets (e.g., Chinese retailers).

    Since the movement of the product may involve other parties, like customs, a shipping company, and a financing bank, the private price may be revealed to all involved parties if a basic version of blockchain technology is used to support this transaction.

    Hyperledger Fabric addresses this issue by keeping private transactions private on the network; only participants who need to know are aware of the necessary details. Data partitioning on the blockchain allows specific data points to be accessible only to the parties who need to know.
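    Fabric implements this partitioning through private data collections, which are declared in a collection configuration file. The sketch below expresses such a configuration as a Python dict mirroring the JSON format; the collection name, organization MSP names, and values are hypothetical:

```python
# Sketch of a Fabric private data collection definition (normally written as
# JSON in a collections config file). Collection and org names are hypothetical.
collection_config = [
    {
        "name": "usRetailPrices",
        # Only the manufacturer and the US retailer org may hold this data.
        "policy": "OR('ManufacturerMSP.member', 'USRetailerMSP.member')",
        "requiredPeerCount": 1,   # peers that must receive the private data
        "maxPeerCount": 3,        # upper bound on dissemination
        "blockToLive": 0,         # 0 = keep the private data indefinitely
        "memberOnlyRead": True,   # only member orgs' clients may read it
    }
]
```

Other parties on the channel (customs, the shipper, the bank) see only a hash of the private data on the shared ledger, while the actual price stays within the collection's member organizations.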

    Criticism of Hyperledger Fabric

    The high-water mark of crypto-enthusiasm broke in 2018 after the collapse of the price of bitcoin (which hit its peak on Dec. 17, 2017). Overoptimistic claims about the value of the new technology were replaced with skepticism, and related technologies, including Hyperledger, also suffered from this skepticism.

    Hyperledger Fabric's Competitors

    Hyperledger Fabric competes with other Hyperledger projects like Iroha, Indy, and Sawtooth. It also competes with R3's Corda, which is also a private, permission-based DLT.

    Blockchain service firm Chainstack published a paper in January 2020 that shows development in Corda has been historically higher than development in Fabric, though Fabric development passed Corda's in Q3 2019 when Fabric switched to GitHub.

    The Chainstack report shows that while there are three times as many developers working on Fabric, Corda developers made more than two times as many code contributions, and Fabric developers push far less code per developer than Corda's developers.

    Hyperledger Fabric Is Not Blockchain and Is Not Efficient

    Several critiques of Hyperledger Fabric argue that a permission-based, private ledger with Hyperledger Fabric's features is not truly a blockchain, and that existing non-blockchain technologies are far less expensive while delivering the same level of security. Cointelegraph's Stuart Popejoy put the case like this:

    Fabric’s architecture is far more complex than any blockchain platform while also being less secure against tampering and attacks. You would think that a “private” blockchain would at least offer scalability and performance, but Fabric fails here as well. Simply put, pilots built on Fabric will face a complex and insecure deployment that won’t be able to scale with their businesses.

    Hyperledger Fabric has also been critiqued for lacking resiliency. A team of researchers from the Sorbonne in Paris and CSIRO - Data61, Australia's national science agency, found that significant network delays reduced the reliability of Fabric: "[B]y delaying block propagation, we demonstrated that Hyperledger Fabric does not provide sufficient consistency guarantees to be deployed in critical environments."

    Hyperledger Fabric 2.0 Released in January 2020

    In January of 2020, Hyperledger Fabric 2.0 was released to address some of the existing criticisms. According to Ron Miller at Techcrunch, "The biggest updates involve forcing agreement among the parties before any new data can be added to the ledger, known as decentralized governance of the smart contracts."

    Although the update isn't a sea change in the simplicity or applicability of Fabric, it does demonstrate that progress continues to be made in the cryptocurrency industry beyond the crypto-mania that occurred in 2018. Over the next five to ten years, enterprise blockchain is expected to find its proper use.

    Tue, 18 Aug 2020 17:37:00 -0500 en text/html https://www.investopedia.com/terms/h/hyperledger-fabric.asp
    Killexams : Navigating the Ins and Outs of a Microservice Architecture (MSA)

    Key takeaways

    • MSA is not a completely new concept; it is about doing SOA correctly by utilizing modern technology advancements.
    • Microservices only address a small portion of the bigger picture - architects need to look at MSA as an architecture practice and implement it to make it enterprise-ready.
    • Micro is not only about the size, it is primarily about the scope.
    • Integration is a key aspect of MSA that can be implemented as micro-integrations where applicable.
    • An iterative approach helps an organization to move from its current state to a complete MSA.

    Enterprises today contain a mix of services, legacy applications, and data, which are topped by a range of consumer channels, including desktop, web and mobile applications. But too often, there is a disconnect due to the absence of a properly created and systematically governed integration layer, which is required to enable business functions via these consumer channels. The majority of enterprises are battling this challenge by implementing a service-oriented architecture (SOA) where application components provide loosely-coupled services to other components via a communication protocol over a network. Eventually, the intention is to embrace a microservice architecture (MSA) to be more agile and scalable. While not fully ready to adopt an MSA just yet, these organizations are architecting and implementing enterprise application and service platforms that will enable them to progressively move toward an MSA.

    In fact, Gartner predicts that by 2017 over 20% of large organizations will deploy self-contained microservices to increase agility and scalability, and it's happening already. MSA is increasingly becoming an important way to deliver efficient functionality. It serves to untangle the complications that arise with the creation of services; the incorporation of legacy applications and databases; and the development of web apps, mobile apps, or any consumer-based applications.

    Today, enterprises are moving toward a clean SOA and embracing the concept of an MSA within a SOA. Possibly the biggest draws are the componentization and single function offered by these microservices that make it possible to deploy the component rapidly as well as scale it as needed. It isn't a novel concept though.

    For instance, in 2011, a service platform in the healthcare space started a new strategy where whenever it wrote a new service, it would spin up a new application server to support the service deployment. So, it's a practice that came from the DevOps side that created an environment with fewer dependencies between services and ensured a minimum impact to the rest of the systems in the event of some sort of maintenance. As a result, the services were running over 80 servers. It was, in fact, very basic since there were no proper DevOps tools available as there are today; instead, they were using Shell scripts and Maven-type tools to build servers.

    While microservices are important, it's just one aspect of the bigger picture. It's clear that an organization cannot leverage the full benefits of microservices on their own. The inclusion of MSA and incorporation of best practices when designing microservices is key to building an environment that fosters innovation and enables the rapid creation of business capabilities. That's the real value add.

    Addressing Implementation Challenges

    The generally accepted practice when building your MSA is to focus on how you would scope out a service that provides a single function, rather than on its size. The inner architecture typically addresses the implementation of the microservices themselves. The outer architecture covers the platform capabilities that are required to ensure connectivity, flexibility and scalability when developing and deploying your microservices. To this end, enterprise middleware plays a key role when crafting both your inner and outer architectures of the MSA.

    First, middleware technology should be DevOps-friendly, contain high-performance functionality, and support key service standards. Moreover, it must support a few design fundamentals, such as an iterative architecture, and be easily pluggable, which in turn will provide rapid application development with continuous release. On top of these, a comprehensive data analytics layer is critical for supporting a design for failure.

    The biggest mistake enterprises often make when implementing an MSA is to completely throw away established SOA approaches and replace them with the theory behind microservices. This results in an incomplete architecture and introduces redundancies. The smarter approach is to consider an MSA as a layered system that includes enterprise service bus (ESB)-like functionality to handle all integration-related functions. This will also act as a mediation layer that enables changes to occur at this level, which can then be applied to all relevant microservices. In other words, an ESB or similar mediation engine enables a gradual move toward an MSA by providing the required connectivity to merge legacy data and services into microservices. This approach is also important for incorporating some fundamental rules by launching the microservice first and then exposing it via an API.
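    The protocol-conversion role of such a mediation layer can be sketched in miniature. The message format, operation name, and endpoint URL below are all hypothetical; the point is the translation and routing step itself:

```python
# Minimal mediation-layer sketch: translate a legacy XML request into the
# JSON a microservice expects, and route it by operation name.
# The endpoint URL and message format are hypothetical.
import json
import xml.etree.ElementTree as ET

ROUTES = {"getPrice": "http://pricing-svc/api/price"}  # hypothetical endpoint

def mediate(legacy_xml):
    """Convert a legacy request into a (endpoint, JSON body) pair."""
    root = ET.fromstring(legacy_xml)
    payload = {child.tag: child.text for child in root}
    endpoint = ROUTES[root.tag]  # routing decision based on the operation
    return endpoint, json.dumps(payload)

endpoint, body = mediate("<getPrice><sku>123</sku></getPrice>")
print(endpoint, body)  # http://pricing-svc/api/price {"sku": "123"}
```

A real mediation engine would also handle security bridging and transport conversion (e.g., SOAP/JMS to HTTP), but the shape of the work is the same: translate at the boundary so the microservices stay clean.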

    Scoping Out and Designing the 'Inner Architecture'

    Significantly, the inner architecture needs to be simple so that each microservice is easily and independently deployable, and independently disposable. Disposability is required in the event that the microservice fails or a better service emerges; in either case, the respective microservice must be easy to dispose of. The microservice also needs to be well supported by the deployment architecture and the operational environment in which it is built, deployed, and executed. An ideal example of this would be releasing a new version of the same service to introduce bug fixes, include new features or enhancements to existing features, and to remove deprecated services.

    The key requirements of an MSA inner architecture are determined by the framework on which the MSA is built. Throughput, latency, and low resource usage (memory and CPU cycles) are among the key requirements that need to be taken into consideration. A good microservice framework will typically build on a lightweight, fast runtime and modern programming models, such as annotated meta-configuration that's independent from the core business logic. Additionally, it should offer the ability to secure microservices using industry-leading security standards, as well as metrics to monitor the behavior of microservices.

    With the inner architecture, the implementation of each microservice is relatively simple compared to the outer architecture. A good service design will ensure that six factors have been considered when scoping out and designing the inner architecture:

    First, the microservice should have a single purpose and single responsibility, and the service itself should be delivered as a self-contained unit of deployment that can create multiple instances at the runtime for scale.

    Second, the microservice should have the ability to adopt an architecture that's best suited for the capabilities it delivers and one that uses the appropriate technology.

    Third, once the monolithic services are broken down into microservices, each microservice or set of microservices should have the ability to be exposed as APIs. However, within the internal implementation, the service could adopt any suitable technology to deliver that respective business capability by implementing the business requirement. To do this, the enterprise may want to consider something like Swagger to define the API specification or API definition of a particular microservice, and the microservice can use this as the point of interaction. This is referred to as an API-first approach in microservice development.

    Fourth, with units of deployment, there may be options, such as self-contained deployable artifacts bundled in hypervisor-based images, or container images, which are generally the more popular option.

    Fifth, the enterprise needs to leverage analytics to refine the microservice, as well as to provision for recovery in the event the service fails. To this end, the enterprise can incorporate the use of metrics and monitoring to support this evolutionary aspect of the microservice.

    Sixth, even though the microservice paradigm itself enables the enterprise to have multiple or polyglot implementations for its microservices, the use of best practices and standards is essential for maintaining consistency and ensuring that the solution follows common enterprise architecture principles. This is not to say that polyglot opportunities should be completely vetoed; rather, they need to be governed when used.
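    The API-first approach from the third factor above can be sketched as follows: the contract is written before the implementation, and the implementation is checked against it. The spec structure loosely mirrors an OpenAPI/Swagger paths object; the path, handler, and fields are hypothetical:

```python
# API-first sketch: the contract (loosely mirroring an OpenAPI/Swagger
# paths object) is defined before any implementation. Paths and fields
# are hypothetical.
API_SPEC = {
    "/orders/{id}": {"get": {"responses": {"200": {"description": "an order"}}}}
}

def get_order(order_id):
    """Implementation written against the contract above."""
    return {"id": order_id, "status": "shipped"}

HANDLERS = {"/orders/{id}": get_order}

# A simple conformance check: every path in the spec has a handler.
assert set(API_SPEC) == set(HANDLERS)
print(get_order(42))  # {'id': 42, 'status': 'shipped'}
```

In practice the spec would live in a Swagger/OpenAPI document and tooling would generate stubs and run the conformance checks, but the ordering is the same: spec first, implementation second.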

    Addressing Platform Capabilities with the 'Outer Architecture'

    Once the inner architecture has been set up, architects need to focus on the functionality that makes up the outer architecture of their MSA. A key component of the outer architecture is the introduction of an enterprise service bus (ESB) or similar mediation engine that will aid in connecting legacy data and services into the MSA. A mediation layer will also enable the enterprise to maintain its own standards while others in the ecosystem manage theirs.

    The use of a service registry will support dependency management, impact analysis, and discovery of the microservices and APIs. It will also enable streamlining of service/API composition and the wiring of microservices into a service broker or hub. Any MSA should also support the creation of RESTful APIs that will help the enterprise to customize resource models and application logic when developing apps.

    By sticking to the basics of designing the API first, implementing the microservice, and then exposing it via the API, the API rather than the microservice becomes consumable. Another common requirement enterprises need to address is securing microservices. In a typical monolithic application, an enterprise would use an underlying repository or user store to populate the required information from the security layer of the old architecture. In an MSA, an enterprise can leverage widely-adopted API security standards, such as OAuth2 and OpenID Connect, to implement a security layer for edge components, including APIs within the MSA.
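    The edge-security idea above can be sketched with an HMAC-signed (HS256) JWT bearer token validated at the gateway. This is a stdlib-only illustration; the shared secret and claims are hypothetical, and a real deployment would use a vetted JWT library and the identity provider's published keys rather than hand-rolled crypto:

```python
# Sketch of issuing and validating an HS256 JWT bearer token at an API
# gateway. Secret and claims are illustrative; use a vetted JWT library
# and your identity provider's keys in production.
import base64, hashlib, hmac, json

SECRET = b"shared-secret"  # hypothetical

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(claims: dict) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def validate(token: str) -> dict:
    """Gateway-side check: verify the signature, then return the claims."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("invalid signature")
    padded = payload + "=" * (-len(payload) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

token = issue_token({"sub": "service-a", "scope": "orders:read"})
print(validate(token)["sub"])  # service-a
```

With OAuth2/OpenID Connect the token would be issued by an authorization server rather than the gateway itself, but the validation step at the edge looks much the same.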

    On top of all these capabilities, what really helps to untangle MSA complexities is the use of an underlying enterprise-class platform that provides rich functionality while managing scalability, availability, and performance. That is because the breaking down of a monolithic application into microservices doesn't necessarily amount to a simplified environment or service. To be sure, at the application level, an enterprise essentially is dealing with several microservices that are far simpler than a single monolithic, complicated application. Yet, the architecture as a whole may not necessarily be less arduous.

    In fact, the complexity of an MSA can be even greater given the need to consider the other aspects that come into play when microservices need to talk to each other versus simply making a direct call within a single process. What this essentially means is that the complexity of the system moves to what is referred to as the "outer architecture", which typically consists of an API gateway, service routing, discovery, message channel, and dependency management.

    With the inner architecture now extremely simplified--containing only the foundation and execution runtime that would be used to build a microservice--architects will find that the MSA now has a clean services layer. More focus then needs to be directed toward the outer architecture to address the prevailing complexities that have arisen. There are some common pragmatic scenarios that need to be addressed, as explained below.

    The outer architecture will require an API gateway to help it expose business APIs internally and externally. Typically, an API management platform will be used for this aspect of the outer architecture. This is essential for exposing MSA-based services to consumers who are building end-user applications, such as web apps, mobile apps, and IoT solutions.

    Once the microservices are in place, there will be some sort of service routing that takes place in which the request that comes via APIs will be routed to the relevant service cluster or service pod. Within microservices themselves, there will be multiple instances to scale based on the load. Therefore, there's a requirement to carry out some form of load balancing as well.
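    The routing and load-balancing step just described can be sketched as a round-robin dispatcher over a service's instances. The service name and instance addresses are hypothetical:

```python
# Round-robin routing sketch: requests for a service are spread across its
# instances. Service name and addresses are hypothetical.
import itertools

INSTANCES = {
    "pricing": itertools.cycle(["10.0.0.1:8080", "10.0.0.2:8080"]),
}

def route(service: str) -> str:
    """Pick the next instance of the given service for this request."""
    return next(INSTANCES[service])

print([route("pricing") for _ in range(3)])
# ['10.0.0.1:8080', '10.0.0.2:8080', '10.0.0.1:8080']
```

Production routers layer health checks and weighted strategies on top of this, and scale the instance list up or down with load, but the core dispatch loop is this simple.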

    Additionally, there will be dependencies between microservices--for instance, if microservice A has a dependency on microservice B, it will need to invoke microservice B at runtime. A service registry addresses this need by enabling services to discover the endpoints. The service registry will also manage the API and service dependencies as well as other assets, including policies.
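    A service registry of the kind described can be sketched as a register/discover mapping that also tracks dependencies. The service names and endpoints are illustrative:

```python
# Minimal service-registry sketch: services register their endpoints and
# dependencies, and other services discover endpoints by name at runtime.
# Names and endpoints are illustrative.
REGISTRY = {}

def register(name, endpoint, depends_on=()):
    REGISTRY[name] = {"endpoint": endpoint, "depends_on": list(depends_on)}

def discover(name):
    """Resolve a service's endpoint at runtime instead of hard-coding it."""
    return REGISTRY[name]["endpoint"]

register("svc-b", "http://svc-b:8080")
register("svc-a", "http://svc-a:8080", depends_on=["svc-b"])

# svc-a invokes svc-b through the registry rather than a fixed address.
print(discover("svc-b"))  # http://svc-b:8080
```

Because the registry also records `depends_on`, it can answer the impact-analysis question mentioned earlier: which services must be re-verified when svc-b changes.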

    Next, the MSA outer architecture needs messaging channels, which essentially form the layer that enables interactions within services and links the MSA to the legacy world. In addition, this layer helps to build a communication (micro-integration) channel between microservices, and these channels should use lightweight protocols, such as HTTP and MQTT, among others.

    When microservices talk to each other, there needs to be some form of authentication and authorization. With monolithic apps, this wasn't necessary because there was a direct in-process call. By contrast, with microservices, these translate to network calls. Finally, diagnostics and monitoring are key aspects that need to be considered to figure out the load type handled by each microservice. This will help the enterprise to scale up microservices separately.

    Reviewing MSA Scenarios

    To put things into perspective, let's analyze some actual scenarios that demonstrate how the inner and outer architecture of an MSA work together. We'll assume an organization has implemented its services using Microsoft Windows Communication Foundation or the Java JEE/J2EE service framework, and developers there are writing new services using a new microservices framework by applying the fundamentals of MSA.

    In such a case, the existing services that expose the data and business functionality cannot be ignored. As a result, new microservices will need to communicate with the existing service platforms. In most cases, these existing services will use the standards adhered to by the framework. For instance, old services might use service bindings, such as SOAP over HTTP, Java Message Service (JMS) or IBM MQ, and secured using Kerberos or WS-Security. In this example, messaging channels too will play a big role in protocol conversions, message mediation, and security bridging from the old world to the new MSA.

    Another aspect the organization would need to consider is any impact to its scalability efforts in terms of business growth given the prevalent limitations posed by a monolithic application, whereas an MSA is horizontally scalable. Among some obvious limitations are possible errors as it's cumbersome to test new features in a monolithic environment and delays to implement these changes, hampering the need to meet immediate requirements. Another challenge would be supporting this monolithic code base given the absence of a clear owner; in the case of microservices, individual or single functions can be managed on their own and each of these can be expanded as required quickly without impacting other functions.

    In conclusion, while microservices offer significant benefits to an organization, adopting an MSA in a phased, iterative manner may be the best way forward to ensure a smooth transition. Key aspects that make MSA the preferred service-oriented approach are clear ownership and the fact that it fosters failure isolation, thereby enabling these owners to make services within their domains more stable and efficient.

    About the Author

    Asanka Abeysinghe is vice president of solutions architecture at WSO2. He has over 15 years of industry experience, which include implementing projects ranging from desktop and web applications through to highly scalable distributed systems and SOAs in the financial domain, mobile platforms, and business integration solutions. His areas of specialization include application architecture and development using Java technologies, C/C++ on Linux and Windows platforms. He is also a committer of the Apache Software Foundation.

    Mon, 26 Dec 2016 20:00:00 -0600 en text/html https://www.infoq.com/articles/navigating-microservices-architecture/
    Killexams : Prolifics Acquires Tier 2 Consulting Limited



    Mon, 04 Jul 2022 00:02:00 -0500 en text/html https://www.bakersfield.com/ap/news/prolifics-acquires-tier-2-consulting-limited/article_cbba0a7b-782c-5d69-8ba6-82efef64fd30.html
    Killexams : SD Times news digest: Boomi Blueprint Framework for Data Management, Microsoft to end Windows’ PHP 7.2 support, and Instana enterprise enhancements

    Boomi’s Blueprint framework includes leadership guidance, design practice, and implementation practices. 

    “This set of best practices provides companies with the ability to respond to disruptive forces and quickly adapt their digital platform towards desired business vision and outcomes,” Boomi wrote in a post.

    In addition, leadership guidance provides the Digital Ideation Lab, a Boomi innovation pop-up lab where digital experts will explore technologies, develop prototypes, and create reference architectures for rapid business deployment.

    Microsoft to end Windows’ PHP 7.2 support
    Microsoft stated that PHP 7.2 will go out of support this November. 

    Meanwhile, PHP 7.3 will be going into security-fix mode in November, and 7.4 will have two more years of support from that point. 

    Microsoft will not support PHP for Windows in any capacity for version 8.0 and beyond.

    Instana enterprise enhancements
    Instana launched enterprise enhancements to help organizations manage mission critical applications more effectively. 

    New features include custom dashboards, NGINX Tracing, the IBM MQ Monitoring Sensor and Redis Enterprise Monitoring Sensor, and role-based access control.

    “Unlike traditional APM tools, Instana’s automated APM solution discovers all application service components and application infrastructure, including infrastructure such as AWS Lambda, Kubernetes and Docker,” Instana wrote in a post.

    UiPath announces $225 million funding
    UiPath said it will use the funding to invest more in its research and development of automation solutions. 

    “We will advance our market-leading platform and will continue to deepen our investments in AI-powered innovation and expanded cloud offerings,” said Daniel Dines, the co-founder and CEO of UiPath. “COVID-19 has heightened the critical need of automation to address challenges and create value in days and weeks, not months and years. We are committed to working harder to help our customers evolve, transform, and succeed fast in the new normal.”

    UiPath released its end-to-end hyperautomation platform in May 2020. Additional details are available here.

    Apache weekly updates
    New releases from Apache last week included Apache Jackrabbit 2.21.2, an unstable release cut directly from the trunk with a focus on new features and other improvements. 

    This week also saw Apache Tomcat 7.0.105, 8.5.57, 9.0.37, and 10.0.0-M7 released, containing a number of bug fixes and improvements compared to version 7.0.104. 

    ApacheCon is set for an online event on 29 September – 1 October. Additional details are available here.

    Tue, 12 Jul 2022 12:00:00 -0500 en-US text/html https://sdtimes.com/data/sd-times-news-digest-boomi-blueprint-framework-for-data-management-microsoft-to-end-windows-php-7-2-support-and-instana-enterprise-enhancements/
    Killexams : SingleStore announces $116M financing led by Goldman Sachs Asset Management

    SingleStore, the cloud-native database built for speed and scale to power data-intensive applications, today announced it has raised $116 million in financing led by the growth equity business within Goldman Sachs Asset Management (Goldman Sachs) with new participation from Sanabil Investments. Current investors Dell Technologies Capital, GV, Hewlett Packard Enterprise (HPE), IBM ventures and Insight Partners, among others, also participated in the round.

    “By unifying different types of workloads in a single database, SingleStore supports modern applications, which frequently run real-time analytics on transactional data,” said Holger Staude, managing director at Goldman Sachs. “The company aims to help organizations overcome the challenges of data intensity across multi-cloud, hybrid and on-prem environments, and we are excited to support SingleStore as it enters a new phase of growth.”

    “Our purpose is to unify and simplify modern data,” said SingleStore CEO Raj Verma. “We believe the future is real time, and the future demands a fast, unified and high-reliability database — all aspects in which we are strongly differentiated. I am very excited to partner with Goldman Sachs, the beacon of financial institutions, and further expand our relationship.”

    “At Siemens Global Business Services, we rely on SingleStore to drive our Pulse platform, which requires us to process massive amounts of data from disparate sources,” said Christoph Malassa, Head of Analytics and Intelligence Solutions, Siemens. “The speed and scalability SingleStore provides has allowed us to better serve both our customers and our internal team, and to expand our capabilities along with them, e.g. enabling online analytics that previously had to be conducted offline.”

    The funding comes on the heels of the company’s recent onboarding of its new chief financial officer, Brad Kinnish, and today the company is pleased to welcome Meaghan Nelson as its new general counsel. These two strategic executive hires bring a great depth of experience to the C-suite, making it even more equipped to explore future paths for company growth.

    “I am beyond thrilled to join the team at SingleStore,” said Kinnish. “It’s such an exciting time in the database industry. Major forces such as the rise in cloud and the blending of operational and transactional workloads are causing a third wave of disruption in the way data is managed. SingleStore by design is a leader in the market, and I am confident we will achieve a lot in the coming year.”

    SingleStore’s new general counsel, Meaghan Nelson, brings over a decade of legal experience to SingleStore, including her most recent role as associate general counsel at SaaS company Veeva Systems, as well as prior roles in private practice taking companies such as MaxPoint Interactive, Etsy, Natera and Veeva through their IPOs.

    “I couldn’t be more excited to join SingleStore at this important inflection point for the company,” said Nelson. “I feel that my deep experience working closely with companies through the IPO process along with my experience in scaling G&A orgs will be of great value to SingleStore as we continue to achieve new heights.”

    Previous investments from IBM ventures, HPE and Dell have fueled SingleStore’s strong momentum. It recently launched SingleStoreDB with IBM as well as announced a partnership with SAS to deliver ultra-fast insights at lower costs. The company has almost doubled its headcount in the last 12 months and continues to aggressively hire to meet the demand for its product and services.

    This funding follows SingleStore’s recent product release that empowers customers to create fast and interactive applications at scale and in real time. SingleStore will feature and demo these enhancements at a virtual launch event, [r]evolution 2022, tomorrow, July 13. Register and learn more about the event here.

    Tue, 12 Jul 2022 09:38:00 -0500 en-US text/html https://sdtimes.com/singlestore-announces-116m-financing-led-by-goldman-sachs-asset-management/