People used these P2080-088 exam questions to get 100% marks

If you are keen on passing the IBM P2080-088 exam to advance your career, killexams.com offers IBM Unica Enterprise Marketing Operations Technical Mastery Test v1 practice questions designed to help you pass the P2080-088 test. It provides the most reliable, current, valid, and recently updated P2080-088 Exam Questions, backed by a 100 percent money-back guarantee.

Exam Code: P2080-088 Practice exam 2022 by Killexams.com team
IBM Unica Enterprise Marketing Operations Technical Mastery Test v1
IBM Enterprise test prep
Cybersecurity - what's the real cost? Ask IBM

Cybersecurity has always been a concern for every type of organization. Even in normal times, a major breach is more than just the data economy’s equivalent of a ram-raid on Fort Knox; it has knock-on effects on trust, reputation, confidence, and the viability of some technologies. This is what IBM calls the “haunting effect”.

A successful attack breeds more, of course, both on the same organization again, and on others in similar businesses, or in those that use the same compromised systems. The unspoken effect of this is rising costs for everyone, as all enterprises are forced to spend money and time on checking if they have been affected too.

But in our new world of COVID-19, disrupted economies, climate change, remote working, soaring inflation, and looming recession, all such effects are amplified. Throw in a war that's hammering on Europe's door (with political echoes across the Middle East and Asia) and it's a wonder any of us can get out of bed in the morning.

So, what are the real costs of a successful cyberattack – not just hacks, viruses, and Trojans, but also phishing, ransomware, and concerted campaigns against supply chains and code repositories?

According to IBM’s latest annual survey, breach costs have risen by an unlucky 13% over the past two years, as attackers, which include hostile states, have probed the systemic and operational weaknesses exposed by the pandemic.

The global average cost of a data breach has reached an all-time high of $4.35 million – at least, among the 550 organizations surveyed by the Ponemon Institute for IBM Security in the year from March 2021. Indeed, IBM goes so far as to claim that breaches may be contributing to the rising costs of goods and services. The survey states:

Sixty percent of studied organizations raised their product or services prices due to the breach, when the cost of goods is already soaring worldwide amid inflation and supply chain issues.

Incidents are also “haunting” organizations, says the company, with 83% having experienced more than one data breach, and with 50% of costs occurring more than a year after the successful attack.

Cloud maturity is a key factor, adds the report:

Forty-three percent of studied organizations are in the early stages [of cloud adoption] or have not started applying security practices across their cloud environments, observing over $660,000 in higher breach costs, on average, than studied organizations with mature security across their cloud environments.

Forty-five percent of respondents run a hybrid cloud infrastructure. This leads to lower average breach costs than among those operating a public- or private-cloud model: $3.8 million versus $5.02 million (public) and $4.24 million (private).

That said, those are still significant costs, and may suggest that complexity is what deters attackers, rather than having a single target to hit. Nonetheless, hybrid cloud adopters are able to identify and contain data breaches 15 days faster on average, says the report.

However, with 277 days being the average time lag – an extraordinary figure – the real lesson may be that today’s enterprise systems are adept at hiding security breaches, which may appear as normal network traffic. Forty-five percent of breaches occurred in the cloud, says the report, so it is clearly imperative to get on top of security in that domain.

IBM then makes the following bold claim:

Participating organizations fully deploying security AI and automation incurred $3.05 million less on average in breach costs compared to studied organizations that have not deployed the technology – the biggest cost saver observed in the study.

Whether this finding will stand for long as attackers explore new ways to breach automated and/or AI-based systems – and perhaps automate attacks of their own invisibly – remains to be seen. Compromised digital employee, anyone?

Global systems at risk

But perhaps the most telling finding is that cybersecurity has a political dimension – beyond the obvious one of Russian, Chinese, North Korean, or Iranian state incursions, of course.

Concerns over critical infrastructure and global supply chains are rising, with threat actors seeking to disrupt global systems that include financial services, industrial, transportation, and healthcare companies, among others.

A year ago in the US, the Biden administration issued an Executive Order on cybersecurity that focused on the urgent need for zero-trust systems. Despite this, only 21% of critical infrastructure organizations have so far adopted a zero-trust security model, according to the report. It states:

Almost 80% of the critical infrastructure organizations studied don’t adopt zero-trust strategies, seeing average breach costs rise to $5.4 million – a $1.17 million increase compared to those that do. All while 28% of breaches among these organizations were ransomware or destructive attacks.

Add to that, 17% of breaches at critical infrastructure organizations were caused by a business partner being compromised first, highlighting the security risks that over-trusting environments pose.

That aside, one of the big stories over the past couple of years has been the rise of ransomware: malicious code that locks up data, enterprise systems, or individual computers, forcing users to pay a ransom to (they hope) retrieve their systems or data.

But according to IBM, there are no obvious winners or losers in this insidious practice. The report adds:

Businesses that paid threat actors’ ransom demands saw $610,000 less in average breach costs compared to those that chose not to pay – not including the ransom amount paid.

However, when accounting for the average ransom payment – which according to Sophos reached $812,000 in 2021 – businesses that opt to pay the ransom could net higher total costs, all while inadvertently funding future ransomware attacks.

The persistence of ransomware is fuelled by what IBM calls the “industrialization of cybercrime”.

The risk profile is also changing. Ransomware attack times show a massive drop of 94% over the past three years, from over two months to just under four days. Good news? Not at all, says the report, as the attacks may be higher impact, with more immediate consequences (such as destroyed data, or private data being made public on hacker forums).

My take

The key lesson in cybersecurity today is that all of us are both upstream and downstream from partners, suppliers, and customers in today’s extended enterprises. We are also at the mercy of reused but compromised code from trusted repositories, and even sometimes from hardware that has been compromised at source.

So, what is the answer? Businesses should ensure that their incident responses are tested rigorously and frequently in advance – along with using red-, blue-, or purple-team approaches (thinking like a hacker, a defender, or both).

Regrettably, IBM says that 37% of organizations that have IR plans in place fail to test them regularly. To paraphrase Spinal Tap, you can’t code for stupid.

Enterprise innovation: Low Code/No Code democratizes IT

Low-code/no-code (LCNC) platforms are being used by businesses today to generate value and stimulate innovation across many industries. Enterprises can deliver new capabilities quickly and easily on demand without having to depend on their IT teams. These software development environments make it possible for people with little or no professional coding knowledge to design and change applications. Sixty percent of low-code/no-code users expect to use their platforms more frequently.

Businesses are increasingly depending on cutting-edge solutions like low-code/no-code (LCNC) platforms because they want to build apps quickly as they embark on their digital transformation journeys. These platforms, which demand a minimal level of technical expertise, are rapidly gaining popularity among businesses in a variety of industries that want to easily and quickly build their own apps. “This trend has also given birth to ‘citizen developers’ which has been instrumental for many organizations to bridge their IT skills gap.”, observes P Saravanan, Vice-President, Cloud Engineering, Oracle India.

Factors driving the adoption of LCNCs

“Rapid automation and a shortage of talented/skilled developers are the key factors driving LCNC. The recent pandemic has also pushed all companies towards digital transformation with greater speed”, says Mitesh Shah, Vice President, SAP BTP Application Core Products & Services.

The growing need for businesses to respond with agility and speed to changing market dynamics has led to an increased adoption of LCNC approach. Project timelines come down from months to days leading to faster product rollouts. “LCNC approach involves smaller teams, fewer resources, lower infrastructure or low maintenance costs, and better ROI with faster agile releases making it more cost-effective than from-scratch development”, Vishal Chahal, Director IBM Automation, IBM India Software Labs adds.

The current macroeconomic climate has tightened financial constraints for enterprises everywhere. Companies are therefore seeking application development methods that are affordable, which LCNC provides.

The post-pandemic scenario and the requirement for organisations to develop resilience have sped up the adoption of technology; this has led to what we also refer to as compressed transformation—the simultaneous transformation of several organisational components.

Then, there is the demand for agility and experimentation skills as firms engage in rapid transformation and create cutting-edge apps to support their company and workforce development agenda. LCNC brings never-before-seen agility to the development of contemporary multi-channel experiences. “It also helps organizations address the talent gap as skilled technology talent is becoming harder to find, and LCNC developers can help organizations tap into diversified talent that brings business expertise”, Raghavan Iyer, Senior Managing Director, Innovation Lead - Integrated Global Services, Accenture Technology opines.

Accelerating enterprise innovation

LCNCs are designed to harness the power of the cloud and data in order to let business users create applications that provide unique innovations to transform operations and experiences, and to deliver operational efficiencies and insights. The inclusion of industry accelerators and interfaces with the digital core in LCNC platforms creates a myriad of opportunities for applying data to innovative and disruptive applications. One of LCNC's main advantages is that it recruits those who are most ideally situated to effect change. “Citizen developers can closely collaborate with professional developers and IT experts to create enterprise class applications to experiment and develop applications”, Iyer adds.

According to a Gartner estimate, 70 percent of new applications developed by organizations will use low-code or no-code technologies by 2025. Programming expertise may not be as crucial in the future as LCNC technologies automate the process of creating new apps. “This will eventually free up developers to focus on development for niche areas”, Shah explains. Nowadays, rather than being predominantly driven by technology professionals, enterprise innovation focuses on boosting customer experiences, increasing efficiency, and improving business processes. Adoption of LCNC platforms and technologies enables participation in the innovation process from a variety of workforce segments, particularly those with domain expertise.

Bridging the IT skills gap

With the help of LCNC, businesses can stop relying on IT teams to implement and develop new solutions, and business users are given the tools they need to become change agents. Professional developers can concentrate on more intricate, inventive, and feature-rich innovations by using low code approaches that automate the fundamental routines. No Code enables business users (or citizen developers) to investigate and test out novel solutions despite having little to no coding experience.

Enterprises now want every bit of talent and expertise they can acquire to meet the demands of the rapidly changing business environment. The LCNC approach's citizen developers assist firms in addressing the talent shortage, employee attrition, and skill gaps.

Capabilities of organizations

IBM has built LCNC capabilities into its platforms for end-to-end coverage, from development and deployment to the management of solutions. “IBM Automation platforms provide AI-driven capability to manage and automate both IT systems and business processes through the LCNC approach. Using technology like Turbonomic and Instana along with Watson AIOps, users are able to automate the observability, optimization, and remediation of their hybrid cloud solutions with low to no coding requirements, monitor their IT systems while getting AI-driven actions for reducing cost and performing dynamic optimization to upscale or downscale their systems with no coding and minimal IT support”, remarked Vishal.

Oracle’s primary offering, Oracle APEX, a low-code platform, is adopted for enterprise apps across the world. Saravanan adds, “APEX enables users to build enterprise apps 20x faster and with 100x less code. Businesses are also becoming aware of the value of LCNC in India.”

At Accenture, there are large communities of practitioners on LCNC cutting across hyperscalers, core platforms and pureplay development platforms.“We have built a global practice of LCNC that creates thousands of applications for ourselves and our clients.”, says Iyer.

SAP Labs India is developing the core services behind the LCNC products of SAP. “LCNC core services are providing the unification across the various LCNC offerings of SAP. Additionally, in the area of Process Automation, Labs India teams are playing a significant role”, Shah states.

With the increasing move to the LCNC approach, technology is now more readily available to all employees inside the company, improving communication between IT and business divisions and allowing for the development of solutions that are more suited to corporate requirements. Adoption of such platforms can also aid in bridging the skill shortage in the IT sector as it enables businesses to tap into talent pools outside of their usual boundaries.

Navigating the Ins and Outs of a Microservice Architecture (MSA)

Key takeaways

  • MSA is not a completely new concept; it is about doing SOA correctly by utilizing modern technology advancements.
  • Microservices only address a small portion of the bigger picture - architects need to look at MSA as an architecture practice and implement it to make it enterprise-ready.
  • Micro is not only about the size, it is primarily about the scope.
  • Integration is a key aspect of MSA that can be implemented as micro-integrations where applicable.
  • An iterative approach helps an organization to move from its current state to a complete MSA.

Enterprises today contain a mix of services, legacy applications, and data, which are topped by a range of consumer channels, including desktop, web and mobile applications. But too often, there is a disconnect due to the absence of a properly created and systematically governed integration layer, which is required to enable business functions via these consumer channels. The majority of enterprises are battling this challenge by implementing a service-oriented architecture (SOA) where application components provide loosely-coupled services to other components via a communication protocol over a network. Eventually, the intention is to embrace a microservice architecture (MSA) to be more agile and scalable. While not fully ready to adopt an MSA just yet, these organizations are architecting and implementing enterprise application and service platforms that will enable them to progressively move toward an MSA.

In fact, Gartner predicts that by 2017 over 20% of large organizations will deploy self-contained microservices to increase agility and scalability, and it's happening already. MSA is increasingly becoming an important way to deliver efficient functionality. It serves to untangle the complications that arise with the creation of services; the incorporation of legacy applications and databases; and the development of web apps, mobile apps, or any consumer-based applications.

Today, enterprises are moving toward a clean SOA and embracing the concept of an MSA within a SOA. Possibly the biggest draws are the componentization and single function offered by these microservices that make it possible to deploy the component rapidly as well as scale it as needed. It isn't a novel concept though.

For instance, in 2011, a service platform in the healthcare space started a new strategy: whenever it wrote a new service, it would spin up a new application server to support the service deployment. It's a practice that came from the DevOps side, creating an environment with fewer dependencies between services and ensuring minimal impact on the rest of the systems in the event of maintenance. As a result, the services were running across more than 80 servers. It was, in fact, very basic, since there were no proper DevOps tools available as there are today; instead, they were using shell scripts and Maven-type tools to build servers.

While microservices are important, it's just one aspect of the bigger picture. It's clear that an organization cannot leverage the full benefits of microservices on their own. The inclusion of MSA and incorporation of best practices when designing microservices is key to building an environment that fosters innovation and enables the rapid creation of business capabilities. That's the real value add.

Addressing Implementation Challenges

The generally accepted practice when building your MSA is to focus on how you would scope out a service that provides a single function, rather than on its size. The inner architecture typically addresses the implementation of the microservices themselves. The outer architecture covers the platform capabilities that are required to ensure connectivity, flexibility and scalability when developing and deploying your microservices. To this end, enterprise middleware plays a key role when crafting both the inner and outer architectures of the MSA.

First, middleware technology should be DevOps-friendly, contain high-performance functionality, and support key service standards. Moreover, it must support a few design fundamentals, such as an iterative architecture, and be easily pluggable, which in turn will provide rapid application development with continuous release. On top of these, a comprehensive data analytics layer is critical for supporting a design for failure.

The biggest mistake enterprises often make when implementing an MSA is to completely throw away established SOA approaches and replace them with the theory behind microservices. This results in an incomplete architecture and introduces redundancies. The smarter approach is to consider an MSA as a layered system that includes enterprise service bus (ESB)-like functionality to handle all integration-related functions. This will also act as a mediation layer that enables changes to occur at this level, which can then be applied to all relevant microservices. In other words, an ESB or similar mediation engine enables a gradual move toward an MSA by providing the required connectivity to merge legacy data and services into microservices. This approach is also important for incorporating some fundamental rules by launching the microservice first and then exposing it via an API.

Scoping Out and Designing the 'Inner Architecture'

Significantly, the inner architecture needs to be simple, so that each microservice is easily and independently deployable and independently disposable. Disposability is required in the event that the microservice fails or a better service emerges; in either case, the respective microservice must be easily disposed of. The microservice also needs to be well supported by the deployment architecture and the operational environment in which it is built, deployed, and executed. An ideal example of this would be releasing a new version of the same service to introduce bug fixes, include new features or enhancements to existing features, and remove deprecated services.

The key requirements of an MSA inner architecture are determined by the framework on which the MSA is built. Throughput, latency, and low resource usage (memory and CPU cycles) are among the key requirements that need to be taken into consideration. A good microservice framework will typically build on a lightweight, fast runtime and modern programming models, such as annotated meta-configuration that is independent from the core business logic. Additionally, it should offer the ability to secure microservices using industry-leading security standards, as well as metrics to monitor the behavior of microservices.

With the inner architecture, the implementation of each microservice is relatively simple compared to the outer architecture. A good service design will ensure that six factors have been considered when scoping out and designing the inner architecture:

First, the microservice should have a single purpose and single responsibility, and the service itself should be delivered as a self-contained unit of deployment that can create multiple instances at runtime for scale.

Second, the microservice should have the ability to adopt an architecture that's best suited for the capabilities it delivers and one that uses the appropriate technology.

Third, once the monolithic services are broken down into microservices, each microservice or set of microservices should have the ability to be exposed as APIs. However, within the internal implementation, the service could adopt any suitable technology to deliver that respective business capability by implementing the business requirement. To do this, the enterprise may want to consider something like Swagger to define the API specification or API definition of a particular microservice, and the microservice can use this as the point of interaction. This is referred to as an API-first approach in microservice development.

Fourth, with units of deployment, there may be options, such as self-contained deployable artifacts bundled in hypervisor-based images, or container images, which are generally the more popular option.

Fifth, the enterprise needs to leverage analytics to refine the microservice, as well as to provision for recovery in the event the service fails. To this end, the enterprise can incorporate the use of metrics and monitoring to support this evolutionary aspect of the microservice.

Sixth, even though the microservice paradigm itself enables the enterprise to have multiple or polyglot implementations for its microservices, the use of best practices and standards is essential for maintaining consistency and ensuring that the solution follows common enterprise architecture principles. This is not to say that polyglot opportunities should be completely vetoed; rather, they need to be governed when used.

Addressing Platform Capabilities with the 'Outer Architecture'

Once the inner architecture has been set up, architects need to focus on the functionality that makes up the outer architecture of their MSA. A key component of the outer architecture is the introduction of an enterprise service bus (ESB) or similar mediation engine that will aid in connecting legacy data and services into the MSA. A mediation layer will also enable the enterprise to maintain its own standards while others in the ecosystem manage theirs.

The use of a service registry will support dependency management, impact analysis, and discovery of the microservices and APIs. It also will enable streamlining of service/API composition and wire microservices into a service broker or hub. Any MSA should also support the creation of RESTful APIs that will help the enterprise to customize resource models and application logic when developing apps.
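To illustrate the API-first idea in the article's own language, here is a minimal sketch of a RESTful microservice endpoint using standard JAX-RS annotations; the resource path, class name, and canned payload are illustrative assumptions, not part of the original article:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // A single-purpose resource: one business capability exposed
    // through a versioned API path that consumers can discover.
    @Path("/api/v1/customers")
    public class CustomerResource {

        @GET
        @Path("/{id}")
        @Produces(MediaType.APPLICATION_JSON)
        public String getCustomer(@PathParam("id") String id) {
            // A real service would delegate to domain logic here;
            // this sketch returns a canned JSON payload.
            return "{\"id\": \"" + id + "\", \"name\": \"example\"}";
        }
    }

Deployed behind an API gateway, a resource like this is what gets registered, routed, and secured by the outer-architecture components discussed below.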

By sticking to the basics of designing the API first, implementing the microservice, and then exposing it via the API, the API rather than the microservice becomes consumable. Another common requirement enterprises need to address is securing microservices. In a typical monolithic application, an enterprise would use an underlying repository or user store to populate the required information from the security layer of the old architecture. In an MSA, an enterprise can leverage widely-adopted API security standards, such as OAuth2 and OpenID Connect, to implement a security layer for edge components, including APIs within the MSA.

On top of all these capabilities, what really helps to untangle MSA complexities is the use of an underlying enterprise-class platform that provides rich functionality while managing scalability, availability, and performance. That is because the breaking down of a monolithic application into microservices doesn't necessarily amount to a simplified environment or service. To be sure, at the application level, an enterprise essentially is dealing with several microservices that are far more simple than a single monolithic, complicated application. Yet, the architecture as a whole may not necessarily be less arduous.

In fact, the complexity of an MSA can be even greater given the need to consider the other aspects that come into play when microservices need to talk to each other versus simply making a direct call within a single process. What this essentially means is that the complexity of the system moves to what is referred to as the "outer architecture", which typically consists of an API gateway, service routing, discovery, message channel, and dependency management.

With the inner architecture now extremely simplified--containing only the foundation and execution runtime that would be used to build a microservice--architects will find that the MSA now has a clean services layer. More focus then needs to be directed toward the outer architecture to address the prevailing complexities that have arisen. There are some common pragmatic scenarios that need to be addressed, as described below.

The outer architecture will require an API gateway to help it expose business APIs internally and externally. Typically, an API management platform will be used for this aspect of the outer architecture. This is essential for exposing MSA-based services to consumers who are building end-user applications, such as web apps, mobile apps, and IoT solutions.

Once the microservices are in place, there will be some sort of service routing that takes place in which the request that comes via APIs will be routed to the relevant service cluster or service pod. Within microservices themselves, there will be multiple instances to scale based on the load. Therefore, there's a requirement to carry out some form of load balancing as well.

Additionally, there will be dependencies between microservices--for instance, if microservice A has a dependency on microservice B, it will need to invoke microservice B at runtime. A service registry addresses this need by enabling services to discover the endpoints. The service registry will also manage the API and service dependencies as well as other assets, including policies.

Next, the MSA outer architecture needs some messaging channels, which essentially form the layer that enables interactions within services and links the MSA to the legacy world. In addition, this layer helps to build a communication (micro-integration) channel between microservices, and these channels should use lightweight protocols, such as HTTP or MQTT.

When microservices talk to each other, there needs to be some form of authentication and authorization. With monolithic apps, this wasn't necessary because there was a direct in-process call. By contrast, with microservices, these translate to network calls. Finally, diagnostics and monitoring are key aspects that need to be considered to figure out the load type handled by each microservice. This will help the enterprise to scale up microservices separately.

Reviewing MSA Scenarios

To put things into perspective, let's analyze some real-world scenarios that demonstrate how the inner and outer architecture of an MSA work together. We'll assume an organization has implemented its services using Microsoft Windows Communication Foundation or the Java JEE/J2EE service framework, and developers there are writing new services using a new microservices framework by applying the fundamentals of MSA.

In such a case, the existing services that expose the data and business functionality cannot be ignored. As a result, new microservices will need to communicate with the existing service platforms. In most cases, these existing services will use the standards adhered to by the framework. For instance, old services might use service bindings, such as SOAP over HTTP, Java Message Service (JMS) or IBM MQ, and be secured using Kerberos or WS-Security. In this example, messaging channels too will play a big role in protocol conversions, message mediation, and security bridging from the old world to the new MSA.

Another aspect the organization would need to consider is any impact on its ability to scale with business growth, given the limitations posed by a monolithic application, whereas an MSA is horizontally scalable. Among the obvious limitations are the potential for errors, since it is cumbersome to test new features in a monolithic environment, and delays in implementing changes, which hamper the ability to meet immediate requirements. Another challenge is supporting a monolithic code base in the absence of a clear owner; in the case of microservices, individual functions can be managed on their own, and each of these can be expanded quickly as required without impacting other functions.

In conclusion, while microservices offer significant benefits to an organization, adopting an MSA in a phased or iterative manner may be the best way to ensure a smooth transition. Key aspects that make MSA the preferred service-oriented approach are clear ownership and the fact that it fosters failure isolation, thereby enabling these owners to make services within their domains more stable and efficient.

About the Author

Asanka Abeysinghe is vice president of solutions architecture at WSO2. He has over 15 years of industry experience, which include implementing projects ranging from desktop and web applications through to highly scalable distributed systems and SOAs in the financial domain, mobile platforms, and business integration solutions. His areas of specialization include application architecture and development using Java technologies, C/C++ on Linux and Windows platforms. He is also a committer of the Apache Software Foundation.

Java Development Definitions
  • A

    abstract class

    In Java and other object oriented programming (OOP) languages, objects and classes (categories of objects) may be abstracted, which means that they are summarized into characteristics that are relevant to the current program’s operation.
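    A minimal sketch (with hypothetical class names) of how an abstract class captures shared behavior while forcing subclasses to fill in the details:

        abstract class Shape {
            // Abstract method: declared without a body; every concrete
            // subclass must provide its own implementation.
            abstract double area();
        }

        class Circle extends Shape {
            private final double radius;

            Circle(double radius) { this.radius = radius; }

            @Override
            double area() { return Math.PI * radius * radius; }
        }

    Because Shape is abstract, `new Shape()` will not compile; only concrete subclasses such as Circle can be instantiated.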

  • AJAX (Asynchronous JavaScript and XML)

    AJAX (Asynchronous JavaScript and XML) is a technique aimed at creating better and faster interactive web apps by combining several programming tools including JavaScript, dynamic HTML (DHTML) and Extensible Markup Language (XML).

  • Apache Camel

    Apache Camel is a Java-based framework that implements messaging patterns from Enterprise Integration Patterns (EIP) to provide a rule-based routing and mediation engine for enterprise application integration (EAI).

  • Apache Solr

    Apache Solr is an open source search platform built upon a Java library called Lucene.

  • AWS SDK for Java

    The AWS SDK for Java is a collection of tools for developers creating Java-based Web apps to run on Amazon cloud components such as Amazon Simple Storage Service (S3), Amazon Elastic Compute Cloud (EC2) and Amazon SimpleDB.

  • AWS SDK for JavaScript

    The AWS SDK for JavaScript is a collection of software tools for the creation of applications and libraries that use Amazon Web Services (AWS) resources.

  • B

    bitwise operator

    Because they allow greater precision and require fewer resources, bitwise operators, which manipulate individual bits, can make some code faster and more efficient. Applications of bitwise operations include encryption, compression, graphics, communications over ports/sockets, embedded systems programming and finite state machines.
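    A brief sketch of Java's bitwise operators on int values (the variable names are illustrative):

        int flags = 0b0110;
        int mask  = 0b0011;
        int both     = flags & mask;   // 0b0010 - bits set in both operands
        int either   = flags | mask;   // 0b0111 - bits set in either operand
        int exactOne = flags ^ mask;   // 0b0101 - bits set in exactly one
        int inverted = ~flags;         // flips every bit
        int doubled  = flags << 1;     // 0b1100 - shift left multiplies by 2
        int halved   = flags >> 1;     // 0b0011 - shift right divides by 2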

  • C

    compositing

    Compositing is used to create layered images and video in advertisements, memes and other content for print publications, websites and apps. Compositing techniques are also used in video game development, augmented reality and virtual reality.

  • const

    Const (constant) in programming is a keyword that defines a variable or pointer as unchangeable.
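    In Java specifically, const is a reserved keyword that is not used; the final keyword provides the equivalent behavior, as in this small sketch:

        final double TAX_RATE = 0.2;   // cannot be reassigned
        // TAX_RATE = 0.25;            // compile-time error: cannot assign to a final variable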

  • CSS (cascading style sheets)

    This definition explains the meaning of cascading style sheets (CSS) and how using them with HTML pages is a user interface (UI) development best practice that complies with the separation of concerns design pattern.

  • E

    embedded Tomcat

    An embedded Tomcat server consists of a single Java web application along with a full Tomcat server distribution, packaged together and compressed into a single JAR, WAR or ZIP file.

  • EmbeddedJava

    EmbeddedJava is Sun Microsystems' software development platform for dedicated-purpose devices with embedded systems, such as products designed for the automotive, telecommunication, and industrial device markets.

  • encapsulation in Java

    Java offers four different "scope" realms--public, protected, private, and package--that can be used to selectively hide data constructs. To achieve encapsulation, the programmer declares the class variables as “private” and then provides what are called public “setter and getter” methods which make it possible to view and modify the variables.
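    A minimal sketch of this pattern (the class and field names are illustrative):

        public class Account {
            private double balance;   // hidden from other classes

            public double getBalance() {
                return balance;
            }

            public void setBalance(double balance) {
                if (balance >= 0) {   // a setter can enforce invariants
                    this.balance = balance;
                }
            }
        }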

  • Enterprise JavaBeans (EJB)

    Enterprise JavaBeans (EJB) is an architecture for setting up program components, written in the Java programming language, that run in the server parts of a computer network that uses the client/server model.

  • exception handler

    In Java, checked exceptions are found when the code is compiled; for the most part, the program should be able to recover from these. Exception handlers are coded to define what the program should do under specified conditions.
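    A minimal sketch of a handler for a checked exception (the file name is a placeholder):

        import java.io.FileReader;
        import java.io.IOException;

        public class ReadExample {
            public static void main(String[] args) {
                try (FileReader reader = new FileReader("data.txt")) {
                    System.out.println("First character code: " + reader.read());
                } catch (IOException e) {
                    // The handler defines what the program does when the read fails.
                    System.err.println("Could not read file: " + e.getMessage());
                }
            }
        }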

  • F

    full-stack developer

    A full-stack developer is a type of programmer that has a functional knowledge of all techniques, languages and systems engineering concepts required in software development.

  • G

    git stash

    Git stash is a built-in command with the distributed version control tool in Git that locally stores all the most recent changes in a workspace and resets the state of the workspace to the prior commit state.

  • GraalVM

    GraalVM is a tool for developers to write and execute Java code.

  • Groovy

    Groovy is a dynamic object-oriented programming language for the Java virtual machine (JVM) that can be used anywhere Java is used.

  • GWT (GWT Web Toolkit)

    The GWT software development kit facilitates the creation of complex browser-based Java applications that can be deployed as JavaScript, for portability across browsers, devices and platforms.

  • H

    Hibernate

    Hibernate is an open source object relational mapping (ORM) tool that provides a framework to map object-oriented domain models to relational databases for web applications.

  • HTML (Hypertext Markup Language)

    HTML (Hypertext Markup Language) is a text-based approach to describing how content contained within an HTML file is structured.

  • I

    InstallAnywhere

    InstallAnywhere is a program that can be used by software developers to package a product written in Java so that it can be installed on any major operating system.

  • IntelliJ IDEA

    The free and open source IntelliJ IDEA includes JUnit and TestNG, code inspections, code completion, support for multiple refactorings, Maven and Ant build tools, a visual GUI (graphical user interface) builder and a code editor for XML as well as Java. The commercial version, Ultimate Edition, provides more features.

  • inversion of control (IoC)

    Inversion of control, also known as the Hollywood Principle, changes the control flow of an application and allows developers to sidestep some typical configuration hassles.

  • J

    J2ME (Java 2 Platform, Micro Edition)

    J2ME (Java 2 Platform, Micro Edition) is a technology that allows programmers to use the Java programming language and related tools to develop programs for mobile wireless information devices such as cellular phones and personal digital assistants (PDAs).

  • JAR file (Java Archive)

    A Java Archive, or JAR file, contains all of the various components that make up a self-contained, executable Java application, deployable Java applet or, most commonly, a Java library to which any Java Runtime Environment can link.

  • Java

    Java is a widely used programming language expressly designed for use in the distributed environment of the internet.

  • Java abstract class

    In Java and other object oriented programming (OOP) languages, objects and classes may be abstracted, which means that they are summarized into characteristics that are relevant to the current program’s operation.

  • Java annotations

    Within the Java development kit (JDK), there are simple annotations used to make comments on code, as well as meta-annotations that can be used to create annotations within annotation-type declarations.
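    A small sketch showing a built-in annotation alongside a custom one declared with meta-annotations (the @Audited name is hypothetical):

        import java.lang.annotation.ElementType;
        import java.lang.annotation.Retention;
        import java.lang.annotation.RetentionPolicy;
        import java.lang.annotation.Target;

        // Meta-annotations control where @Audited may appear and
        // whether it is visible to reflection at runtime.
        @Retention(RetentionPolicy.RUNTIME)
        @Target(ElementType.METHOD)
        @interface Audited { }

        class Service {
            @Audited      // custom annotation in use
            @Override     // built-in annotation: compiler checks the override
            public String toString() { return "service"; }
        }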

  • Java assert

    The Java assert is a mechanism used primarily in nonproduction environments to test for extraordinary conditions that will never be encountered unless a bug exists somewhere in the code.
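    A minimal sketch; note that assertions are only evaluated when the JVM is started with the -ea (enable assertions) flag:

        public class AssertDemo {
            static int divide(int a, int b) {
                assert b != 0 : "divisor must be non-zero";   // checked only with -ea
                return a / b;
            }

            public static void main(String[] args) {
                System.out.println(divide(10, 2));   // prints 5
            }
        }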

  • Java Authentication and Authorization Service (JAAS)

    The Java Authentication and Authorization Service (JAAS) is a set of application program interfaces (APIs) that can determine the identity of a user or computer attempting to run Java code, and ensure that the entity has the privilege or permission to execute the functions requested.

  • Java BufferedReader

    Java BufferedReader is a public Java class that reads large volumes of data from disk into much faster RAM, improving performance by reducing the number of disk reads or network communications that would otherwise be needed for each read command.
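    A minimal usage sketch (the file name is a placeholder):

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;

        public class BufferedReadDemo {
            public static void main(String[] args) throws IOException {
                // The buffer fills from disk in large chunks, so most
                // readLine() calls are served from memory.
                try (BufferedReader in = new BufferedReader(new FileReader("data.txt"))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        System.out.println(line);
                    }
                }
            }
        }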

  • Java Business Integration (JBI)

    Java Business Integration (JBI) is a specification that defines an approach to implementing a service-oriented architecture (SOA), the underlying structure supporting Web service communications on behalf of computing entities such as application programs or human users.

  • Java Card

    Java Card is an open standard from Sun Microsystems for a smart card development platform.

  • Java Champion

    The Java Champion designation is awarded to leaders and visionaries in the Java technology community.

  • Java chip

    The Java chip is a microchip that, when included in or added to a computer, will accelerate the performance of Java programs (including the applets that are sometimes included with Web pages).

  • Java Comparator

    Java Comparator can compare objects to return an integer based on a positive, equal or negative comparison. Since it is not limited to comparing numbers, Java Comparator can be set up to order lists alphabetically or numerically.
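    A small sketch of a Comparator that orders strings by length rather than alphabetically:

        import java.util.Arrays;
        import java.util.Comparator;
        import java.util.List;

        public class ComparatorDemo {
            public static void main(String[] args) {
                List<String> words = Arrays.asList("pear", "fig", "apple");
                // comparingInt builds a Comparator from a key-extracting function.
                words.sort(Comparator.comparingInt(String::length));
                System.out.println(words);   // [fig, pear, apple]
            }
        }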

  • Java compiler

    Generally, Java compilers are run and pointed to a programmer’s code in a text file to produce a class file for use by the Java virtual machine (JVM) on different platforms. Jikes, for example, is an open source compiler that works in this way.

  • Java Cryptography Extension (JCE)

    The Java Cryptography Extension (JCE) is an application program interface (API) that provides a uniform framework for the implementation of security features in Java.

  • Java Data Objects (JDO)

    Java Data Objects (JDO) is an application program interface (API) that enables a Java programmer to access a database implicitly - that is, without having to make explicit Structured Query Language (SQL) statements.

  • Java Database Connectivity (JDBC)

    Java Database Connectivity (JDBC) is an API packaged with the Java SE edition that makes it possible to connect from a Java Runtime Environment (JRE) to external, relational database systems.
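    A minimal connection sketch; the URL, credentials, and table are placeholders, and a matching JDBC driver must be on the classpath:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class JdbcDemo {
            public static void main(String[] args) throws SQLException {
                String url = "jdbc:postgresql://localhost:5432/mydb";
                try (Connection conn = DriverManager.getConnection(url, "user", "password");
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery("SELECT name FROM customers")) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name"));
                    }
                }
            }
        }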

  • Java Development Kit (JDK)

    The Java Development Kit (JDK) provides the foundation upon which all applications that are targeted toward the Java platform are built.

  • Java Flight Recorder

    Java Flight Recorder is a Java Virtual Machine (JVM) profiler that gathers performance metrics without placing a significant load on resources.

  • Java Foundation Classes (JFC)

    Using the Java programming language, Java Foundation Classes (JFC) are pre-written code in the form of class libraries (coded routines) that give the programmer a comprehensive set of graphical user interface (GUI) routines to use.

  • Java IDE

    Java IDEs typically provide language-specific features in addition to the code editor, compiler and debugger generally found in all IDEs. Those elements may include Ant and Maven build tools and TestNG and JUnit testing.

  • Java keyword

    Java keywords are terms that have special meaning in Java programming and cannot be used as identifiers for variables, classes or other elements within a Java program.

  • Java Message Service (JMS)

    Java Message Service (JMS) is an application program interface (API) from Sun Microsystems that supports the formal communication known as messaging between computers in a network.

  • Java Mission Control

    Java Mission Control is a performance-analysis tool that renders sampled JVM metrics in easy-to-understand graphs, tables, histograms, lists and charts.

  • Java Platform, Enterprise Edition (Java EE)

    The Java Platform, Enterprise Edition (Java EE) is a collection of Java APIs owned by Oracle that software developers can use to write server-side applications. It was formerly known as Java 2 Platform, Enterprise Edition, or J2EE.

  • Java Runtime Environment (JRE)

    The Java Runtime Environment (JRE), also known as Java Runtime, is the part of the Java Development Kit (JDK) that contains and orchestrates the set of tools and minimum requirements for executing a Java application.

  • Java Server Page (JSP)

    Java Server Page (JSP) is a technology for controlling the content or appearance of Web pages through the use of servlets, small programs that are specified in the Web page and run on the Web server to modify the Web page before it is sent to the user who requested it.

  • Java string

    Strings, in Java, are immutable sequences of Unicode characters. Strings are objects in Java and the string class enables their creation and manipulation.
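    A short sketch of immutability in practice; methods that appear to modify a string actually return a new one:

        String s = "hello";
        String t = s.toUpperCase();   // returns a new String object
        System.out.println(s);        // hello - the original is unchanged
        System.out.println(t);        // HELLO

        // For repeated modification, the mutable StringBuilder avoids
        // creating a new String on every change:
        StringBuilder sb = new StringBuilder("hello");
        sb.append(" world");
        System.out.println(sb);       // hello world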

  • Java virtual machine (JVM)

    A Java virtual machine (JVM), an implementation of the Java Virtual Machine Specification, interprets compiled Java binary code (called bytecode) for a computer's processor (or "hardware platform") so that it can perform a Java program's instructions.

  • JAVA_HOME

    JAVA_HOME is an operating system (OS) environment variable which can optionally be set after either the Java Development Kit (JDK) or the Java Runtime Environment (JRE) is installed.

  • JavaBeans

    JavaBeans is an object-oriented programming interface from Sun Microsystems that lets you build reusable applications or program building blocks called components that can be deployed in a network on any major operating system platform.
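    A minimal bean sketch following the conventions that tools rely on: a public no-argument constructor, private fields, and public getters/setters (the class name is illustrative):

        import java.io.Serializable;

        public class PersonBean implements Serializable {
            private String name;

            public PersonBean() { }   // required no-argument constructor

            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }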

  • JavaFX

    JavaFX is a software development platform for the creation of both desktop apps and rich internet applications (RIAs) that can run on various devices. The name is a short way of typing "Java Effects."

  • JavaScript

    JavaScript is a programming language that started off simply as a mechanism to add logic and interactivity to an otherwise static Netscape browser.

  • JAX-WS (Java API for XML Web Services)

    Java API for XML Web Services (JAX-WS) is one of a set of Java technologies used to develop Web services.

  • JBoss

    JBoss is a division of Red Hat that provides support for the JBoss open source application server program and related middleware services marketed under the JBoss Enterprise Middleware brand.

  • JDBC Connector (Java Database Connectivity Connector)

    The JDBC (Java Database Connectivity) Connector is a program that enables various databases to be accessed by Java application servers that are run on the Java 2 Platform, Enterprise Edition (J2EE) from Sun Microsystems.

  • JDBC driver

    A JDBC driver (Java Database Connectivity driver) is a small piece of software that allows JDBC to connect to different databases. Once loaded, a JDBC driver connects to a database by providing a specifically formatted URL that includes the port number, the machine and database names.

  • JHTML (Java within Hypertext Markup Language)

    JHTML (Java within Hypertext Markup Language) is a standard for including a Java program as part of a Web page (a page written using the Hypertext Markup Language, or HTML).

  • Jikes

    Jikes is an open source Java compiler from IBM that adheres strictly to the Java specification and promises an "extremely fast" compilation.

  • JMX (Java Management Extensions)

    JMX (Java Management Extensions) is a set of specifications for application and network management in the J2EE development and application environment.

  • JNDI (Java Naming and Directory Interface)

    JNDI (Java Naming and Directory Interface) enables Java platform-based applications to access multiple naming and directory services.

  • JOLAP (Java Online Analytical Processing)

    JOLAP (Java Online Analytical Processing) is a Java application-programming interface (API) for the Java 2 Platform, Enterprise Edition (J2EE) environment that supports the creation, storage, access, and management of data in an online analytical processing (OLAP) application.

  • jQuery

    jQuery is an open-sourced JavaScript library that simplifies creation and navigation of web applications.

  • JRun

    JRun is an application server from Macromedia that is based on Sun Microsystems' Java 2 Platform, Enterprise Edition (J2EE).

  • JSON (Javascript Object Notation)

    JSON (JS Object Notation) is a text-based, human-readable data interchange format used for representing simple data structures and objects in Web browser-based code. JSON is also sometimes used in desktop and server-side programming environments.

  • JTAPI (Java Telephony Application Programming Interface)

    JTAPI (Java Telephony Application Programming Interface) is a Java-based application programming interface (API) for computer telephony applications.

  • just-in-time compiler (JIT)

    A just-in-time (JIT) compiler is a program that turns bytecode into instructions that can be sent directly to a computer's processor (CPU).

  • Jython

    Jython is an open source implementation of the Python programming language, integrated with the Java platform.

  • K

    Kebab case

    Kebab case -- or kebab-case -- is a programming variable naming convention where a developer replaces the spaces between words with a dash.

  • M

    MBean (managed bean)

    In the Java programming language, an MBean (managed bean) is a Java object that represents a manageable resource, such as an application, a service, a component, or a device.

  • Morphis

    Morphis is a Java-based open source wireless transcoding platform from Kargo, Inc.

  • N

    NetBeans

    NetBeans is a Java-based integrated development environment (IDE). The term also refers to the IDE’s underlying application platform framework. 

  • O

    object-relational mapping (ORM)

    Object-relational mapping (ORM) is a mechanism that makes it possible to address, access and manipulate objects without having to consider how those objects relate to their data sources.

  • Open Service Gateway Initiative (OSGi)

    OSGi (Open Service Gateway Initiative) is an industry plan for a standard way to connect devices such as home appliances and security systems to the Internet.

  • OpenJDK

    OpenJDK is a free, open-source version of the Java Development Kit for the Java Platform, Standard Edition (Java SE).

  • P

    Pascal case

    Pascal case is a naming convention in which developers start each new word in a variable with an uppercase letter.

  • prettyprint

    Prettyprint is the process of converting and presenting source code or other objects in a legible and attractive way.

  • R

    Remote Method Invocation (RMI)

    RMI (Remote Method Invocation) is a way that a programmer, using the Java programming language and development environment, can write object-oriented programs in which objects on different computers can interact in a distributed network.

  • S

    Snake case

    Snake case is a naming convention where a developer replaces spaces between words with an underscore.

  • SQLJ

    SQLJ is a set of programming extensions that allow a programmer using the Java programming language to embed statements that provide SQL (Structured Query Language) database requests.

  • Sun Microsystems

    Sun Microsystems (often just called "Sun"), the leading company in computers used as Web servers, also makes servers designed for use as engineering workstations, data storage products, and related software.

  • T

    Tomcat

    Tomcat is an application server from the Apache Software Foundation that executes Java servlets and renders Web pages that include Java Server Page coding.

  • X

    XAML (Extensible Application Markup Language)

    XAML, Extensible Application Markup Language, is Microsoft's XML-based language for creating a rich GUI, or graphical user interface. XAML supports both vector and bitmap types of graphics, as well as rich text and multimedia files.

Hyperledger Fabric

    What Is Hyperledger Fabric?

    Hyperledger Fabric is a modular blockchain framework that acts as a foundation for developing blockchain-based products, solutions, and applications using plug-and-play components that are aimed for use within private enterprises.

    Key Takeaways

    • Hyperledger is an enterprise-grade, open-source distributed ledger framework launched by the Linux Foundation in December 2016.
    • Fabric is a highly-modular, distributed ledger technology (DLT) platform that was designed by IBM for industrial enterprise use.
    • Because Hyperledger Fabric is private and requires permission to access, businesses can segregate information (like prices), plus transactions can be sped up because the number of nodes on the network is reduced.
    • Fabric 2.0 was released in January 2020. The main features of this version are faster transactions, updated smart contract technology, and streamlined data sharing.

    Hyperledger Fabric was initiated by Digital Asset and IBM and has now emerged as a collaborative cross-industry venture, which is currently being hosted by the Linux Foundation. Among the several Hyperledger projects, Fabric was the first one to exit the “incubation” stage and achieve the “active” stage in March 2017.

    How Hyperledger Fabric Works

    Traditional blockchain networks can’t support private transactions and confidential contracts that are of utmost importance for businesses. Hyperledger Fabric was designed in response to this as a modular, scalable and secure foundation for offering industrial blockchain solutions.

    Hyperledger Fabric is the open-source engine for blockchain and takes care of the most important features for evaluating and using blockchain for business use cases.

    Within private industrial networks, the verifiable identity of a participant is a primary requirement. Hyperledger Fabric supports memberships based on permission; all network participants must have known identities. Many business sectors, such as healthcare and finance, are bound by data protection regulations that mandate maintaining data about the various participants and their respective access to various data points. Fabric supports such permission-based membership.

    Modular Architecture

    The modular architecture of Hyperledger Fabric separates the transaction processing workflow into three different stages: smart contracts called chaincode that comprise the distributed logic processing and agreement of the system, transaction ordering, and transaction validation and commitment. This segregation offers multiple benefits:

    • A reduced number of trust levels and verification that keeps the network and processing clutter-free
    • Improved network scalability
    • Better overall performance

    Additionally, Hyperledger Fabric’s support for plug-and-play of various components allows for easy reuse of existing features and ready-made integration of various modules. For instance, if a function already exists that verifies the participant’s identity, an enterprise-level network simply needs to plug and reuse this existing module instead of building the same function from scratch.

    The participants on the network have three distinct roles:

    • Endorser
    • Committer
    • Consenter

    In a nutshell, the transaction proposal is submitted to the endorser peer according to the predefined endorsement policy about the number of endorsers required. After sufficient endorsements by the endorser(s), a batch or block of transactions is delivered to the committer(s). Committers validate that the endorsement policy was followed and that there are no conflicting transactions. Once both the checks are made, the transactions are committed to the ledger.
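    To make this flow concrete, here is a minimal smart-contract (chaincode) sketch using Fabric's Java contract API (fabric-contract-api); the contract and key names are illustrative. Endorsing peers execute these transaction methods, and the resulting read/write sets are what the committers later validate:

        import org.hyperledger.fabric.contract.Context;
        import org.hyperledger.fabric.contract.ContractInterface;
        import org.hyperledger.fabric.contract.annotation.Contract;
        import org.hyperledger.fabric.contract.annotation.Default;
        import org.hyperledger.fabric.contract.annotation.Transaction;

        @Contract(name = "AssetContract")
        @Default
        public final class AssetContract implements ContractInterface {

            // Runs on endorsing peers; writes go into the proposal's
            // write set rather than directly into the ledger.
            @Transaction
            public void createAsset(Context ctx, String key, String value) {
                ctx.getStub().putStringState(key, value);
            }

            @Transaction
            public String readAsset(Context ctx, String key) {
                return ctx.getStub().getStringState(key);
            }
        }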


    Since only confirming instructions—such as signatures and read/write set—are sent across the network, the scalability and performance of the network is enhanced. Only endorsers and committers have access to the transaction, and security is improved with a fewer number of participants having access to key data points.

    Example of Hyperledger Fabric

    Suppose there's a manufacturer that wants to ship chocolates to a specific retailer or market of retailers (i.e., all US retailers) at a specific price but does not want to reveal that price in other markets (i.e., Chinese retailers).

    Since the movement of the product may involve other parties, like customs, a shipping company, and a financing bank, the private price may be revealed to all involved parties if a basic version of blockchain technology is used to support this transaction.

    Hyperledger Fabric addresses this issue by keeping private transactions private on the network; only participants who need to know are aware of the necessary details. Data partitioning on the blockchain allows specific data points to be accessible only to the parties who need to know.

    Criticism of Hyperledger Fabric

    The high-water mark of crypto-enthusiasm broke in 2018 after the collapse of the price of bitcoin (which hit its peak on Dec. 17, 2017). Overoptimistic claims about the value of the new technology were replaced with skepticism, and related technologies, including Hyperledger, also suffered from this skepticism.

    Hyperledger Fabric's Competitors

    Hyperledger Fabric competes with other Hyperledger projects like Iroha, Indy, and Sawtooth. It also competes with R3's Corda, which is also a private, permission-based DLT.

    Blockchain service firm Chainstack published a paper in January 2020 that shows development in Corda has been historically higher than development in Fabric, though Fabric development passed Corda's in Q3 2019 when Fabric switched to GitHub.

The Chainstack report shows that while there are three times as many developers working on Fabric, Corda developers made more than twice as many code contributions, and each Fabric developer pushes far less code than each Corda developer.

    Hyperledger Fabric Is Not Blockchain and Is Not Efficient

Several critiques of Hyperledger Fabric argue that a permission-based, private ledger with Hyperledger Fabric's features is not really a blockchain, and that existing non-blockchain technologies are far less expensive while providing the same level of security. Cointelegraph's Stuart Popejoy put the case like this:

    Fabric’s architecture is far more complex than any blockchain platform while also being less secure against tampering and attacks. You would think that a “private” blockchain would at least offer scalability and performance, but Fabric fails here as well. Simply put, pilots built on Fabric will face a complex and insecure deployment that won’t be able to scale with their businesses.

    Hyperledger Fabric has also been critiqued for lacking resiliency. A team of researchers from the Sorbonne in Paris and CSIRO - Data61, Australia's national science agency, found that significant network delays reduced the reliability of Fabric: "[B]y delaying block propagation, we demonstrated that Hyperledger Fabric does not provide sufficient consistency guarantees to be deployed in critical environments."

    Hyperledger Fabric 2.0 Released in January 2020

    In January of 2020, Hyperledger Fabric 2.0 was released to address some of the existing criticisms. According to Ron Miller at Techcrunch, "The biggest updates involve forcing agreement among the parties before any new data can be added to the ledger, known as decentralized governance of the smart contracts."

Although the update isn't a sea change in the simplicity or applicability of Fabric, it does demonstrate that enterprise blockchain continues to progress beyond the crypto-mania of 2018. Over the next five to ten years, enterprise blockchain is expected to find its proper use.

History of Artificial Intelligence

    Of the myriad technological advances of the 20th and 21st centuries, one of the most influential is undoubtedly artificial intelligence (AI). From search engine algorithms reinventing how we look for information to Amazon’s Alexa in the consumer sector, AI has become a major technology driving the entire tech industry forward into the future.

    Whether you’re a burgeoning start-up or an industry titan like Microsoft, there’s probably at least one part of your company working with AI or machine learning. According to a study from Grand View Research, the global AI industry was valued at $93.5 billion in 2021.

    AI as a force in the tech industry exploded in prominence in the 2000s and 2010s, but AI has been around in some form or fashion since at least 1950 and arguably stretches back even further than that.

    The broad strokes of AI’s history, such as the Turing Test and chess computers, are ingrained in the popular consciousness, but a rich, dense history lives beneath the surface of common knowledge. This article will distill that history and show you AI’s path from mythical idea to world-altering reality.


    From Folklore to Fact

While AI is often considered a cutting-edge concept, humans have been imagining artificial intelligences for millennia, and those imaginings have had a tangible impact on the advancements made in the field today.

Prominent mythological examples include Talos, the bronze automaton of Greek myth that protected the island of Crete, and the alchemical homunculi of the Renaissance period. Characters like Frankenstein's Monster, HAL 9000 of 2001: A Space Odyssey, and Skynet from the Terminator franchise are just some of the ways we've depicted artificial intelligence in modern fiction.

    One of the fictional concepts with the most influence on the history of AI is Isaac Asimov’s Three Laws of Robotics. These laws are frequently referenced when real-world researchers and organizations create their own laws of robotics.

    In fact, when the U.K.’s Engineering and Physical Sciences Research Council (EPSRC) and Arts and Humanities Research Council (AHRC) published its 5 principles for designers, builders and users of robots, it explicitly cited Asimov as a reference point, though stating that Asimov’s Laws “simply don’t work in practice.”

    Microsoft CEO Satya Nadella also made mention of Asimov’s Laws when presenting his own laws for AI, calling them “a good, though ultimately inadequate, start.”


    Computers, Games, and Alan Turing

    As Asimov was writing his Three Laws in the 1940s, researcher William Grey Walter was developing a rudimentary, analogue version of artificial intelligence. Called tortoises or turtles, these tiny robots could detect and react to light and contact with their plastic shells, and they operated without the use of computers.

Later, in the 1960s, Johns Hopkins University built its Beast, another computer-less automaton, which could navigate the halls of the university via sonar and charge itself at special wall outlets when its battery ran low.

    However, artificial intelligence as we know it today would find its progress inextricably linked to that of computer science. Alan Turing’s 1950 paper Computing Machinery and Intelligence, which introduced the famous Turing Test, is still influential today. Many early AI programs were developed to play games, such as Christopher Strachey’s checkers-playing program written for the Ferranti Mark I computer.

The term “artificial intelligence” itself wasn’t codified until 1956’s Dartmouth Workshop, organized by Marvin Minsky, John McCarthy, Claude Shannon, and Nathaniel Rochester, where McCarthy coined the name for the burgeoning field.

    The Workshop was also where Allen Newell and Herbert A. Simon debuted their Logic Theorist computer program, which was developed with the help of computer programmer Cliff Shaw. Designed to prove mathematical theorems the same way a human mathematician would, Logic Theorist would go on to prove 38 of the first 52 theorems found in the Principia Mathematica. Despite this achievement, the other researchers at the conference “didn’t pay much attention to it,” according to Simon.

    Games and mathematics were focal points of early AI because they were easy to apply the “reasoning as search” principle to. Reasoning as search, also called means-ends analysis (MEA), is a problem-solving method that follows three basic steps:

• Determine the current state of whatever problem you’re observing (you’re feeling hungry).
    • Identify the end goal (you no longer feel hungry).
    • Decide the actions you need to take to solve the problem (you make a sandwich and eat it).

The rationale of this early forerunner of AI: if the actions do not solve the problem, find a new set of actions to take and repeat until the problem is solved.
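As a toy illustration, the Go sketch below walks the hunger example through a means-ends loop. The states and actions are hypothetical stand-ins, not a general solver.

```go
package main

import "fmt"

// A toy means-ends analysis loop for the "hungry" example above:
// repeatedly pick an action whose precondition matches the current
// state until the goal state is reached or no action applies.
func main() {
	state, goal := "hungry", "not hungry"

	// Each action has a precondition state and a result state.
	type action struct{ name, pre, post string }
	actions := []action{
		{"make a sandwich", "hungry", "has sandwich"},
		{"eat the sandwich", "has sandwich", "not hungry"},
	}

	for state != goal {
		progressed := false
		for _, a := range actions {
			if a.pre == state { // this action reduces the difference
				fmt.Println("take action:", a.name)
				state = a.post
				progressed = true
				break
			}
		}
		if !progressed { // no applicable action: replan or give up
			fmt.Println("stuck; no action applies to", state)
			break
		}
	}
	fmt.Println("final state:", state)
}
```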

    Neural Nets and Natural Languages

    With Cold-War-era governments willing to throw money at anything that might deliver them an advantage over the other side, AI research experienced a burst of funding from organizations like DARPA throughout the ’50s and ’60s.

    This research spawned a number of advances in machine learning. For example, Simon and Newell’s General Problem Solver, while using MEA, would generate heuristics, mental shortcuts which could block off possible problem-solving paths the AI might explore that weren’t likely to arrive at the desired outcome.

    Initially proposed in the 1940s, the first artificial neural network was invented in 1958, thanks to funding from the United States Office of Naval Research.

A major focus of researchers in this period was trying to get AI to understand human language. Daniel Bobrow helped pioneer natural language processing with his STUDENT program, which was designed to solve word problems.

    In 1966, Joseph Weizenbaum introduced the first chatbot, ELIZA, an act which Internet users the world over are grateful for. Roger Schank’s conceptual dependency theory, which attempted to convert sentences into basic concepts represented as a set of simple keywords, was one of the most influential early developments in AI research.


    The First AI Winter

In the 1970s, the pervasive optimism in AI research from the ’50s and ’60s began to fade. Funding dried up as sky-high promises were dragged to earth by a myriad of real-world issues facing AI research. Chief among them was a limitation in computational power.

    As Bruce G. Buchanan explained in an article for AI Magazine: “Early programs were necessarily limited in scope by the size and speed of memory and processors and by the relative clumsiness of the early operating systems and languages.” This period, as funding disappeared and optimism waned, became known as the AI Winter.

The period was marked by setbacks and interdisciplinary disagreements amongst AI researchers. Marvin Minsky and Seymour Papert’s 1969 book Perceptrons, a critique of Frank Rosenblatt’s perceptron work, discouraged the field of neural networks so thoroughly that very little research was done in the area until the 1980s.

    Then, there was the divide between the so-called “neats” and the “scruffys.” The neats favored the use of logic and symbolic reasoning to train and educate their AI. They wanted AI to solve logical problems like mathematical theorems.

John McCarthy introduced the idea of using logic in AI with his 1959 Advice Taker proposal. In addition, the Prolog programming language, developed in 1972 by Alain Colmerauer and Philippe Roussel, was designed specifically as a logic programming language and still finds use in AI today.

    Meanwhile, the scruffys were attempting to get AI to solve problems that required AI to think like a person. In a 1975 paper, Marvin Minsky outlined a common approach used by scruffy researchers, called “frames.”

    Frames are a way that both humans and AI can make sense of the world. When you encounter a new person or event, you can draw on memories of similar people and events to deliver you a rough idea of how to proceed, such as when you order food at a new restaurant. You might not know the menu or the people serving you, but you have a general idea of how to place an order based on past experiences in other restaurants.

    From Academia to Industry

    The 1980s marked a return to enthusiasm for AI. R1, an expert system implemented by the Digital Equipment Corporation in 1982, was saving the company a reported $40 million a year by 1986. The success of R1 proved AI’s viability as a commercial tool and sparked interest from other major companies like DuPont.

    On top of that, Japan’s Fifth Generation project, an attempt to create intelligent computers running on Prolog the same way normal computers run on code, sparked further American corporate interest. Not wanting to be outdone, American companies poured funds into AI research.

Taken altogether, this increase in interest and shift to industrial research resulted in the AI industry ballooning to $2 billion in value by 1988. Adjusted for inflation, that’s nearly $5 billion in 2022.


    The Second AI Winter

    In the 1990s, however, interest began receding in much the same way it had in the ’70s. In 1987, Jack Schwartz, the then-new director of DARPA, effectively eradicated AI funding from the organization, yet already-earmarked funds didn’t dry up until 1993.

    The Fifth Generation Project had failed to meet many of its goals after 10 years of development, and as businesses found it cheaper and easier to purchase mass-produced, general-purpose chips and program AI applications into the software, the market for specialized AI hardware, such as LISP machines, collapsed and caused the overall market to shrink.

Additionally, the expert systems that had proven AI’s viability at the beginning of the decade began showing a fatal flaw. As a system stayed in use, it continually accumulated rules and needed a larger and larger knowledge base to handle them. Eventually, the number of staff needed to maintain and update the system’s knowledge base would grow until it became financially untenable. The combination of these factors and others resulted in the Second AI Winter.


    Into the New Millennium and the Modern World of AI

The late 1990s and early 2000s showed signs of the coming AI springtime. Some of AI’s oldest goals were finally realized, such as Deep Blue’s 1997 victory over then-reigning chess world champion Garry Kasparov, a landmark moment for AI.

More sophisticated mathematical tools and collaboration with fields like electrical engineering transformed AI into a more logic-oriented scientific discipline, allowing the aforementioned neats to claim victory over their scruffy counterparts. Marvin Minsky, for his part, declared in 2003 that the field of AI was and had been “brain dead” for the past 30 years.

    Meanwhile, AI found use in a variety of new areas of industry: Google’s search engine algorithm, data mining, and speech recognition just to name a few. New supercomputers and programs would find themselves competing with and even winning against top-tier human opponents, such as IBM’s Watson winning Jeopardy! in 2011 over Ken Jennings, who’d once won 74 episodes of the game show in a row.

One of the most impactful pieces of AI in recent years has been Facebook’s algorithms, which determine what posts you see and when, in an attempt to curate an online experience for the platform’s users. Algorithms with similar functions can be found on websites like YouTube and Netflix, where they predict what content viewers want to watch next based on viewing history.

The benefits of these algorithms to anyone but the companies’ bottom lines are up for debate, as former employees have testified before Congress about the dangers they can pose to users.

Sometimes, these innovations weren’t even recognized as AI. As Nick Bostrom put it in a 2006 CNN interview: “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labelled AI anymore.”

    The trend of not calling useful artificial intelligence AI did not last into the 2010s. Now, start-ups and tech mainstays alike scramble to claim their latest product is fueled by AI or machine learning. In some cases, this desire has been so powerful that some will declare their product is AI-powered, even when the AI’s functionality is questionable.

    AI has found its way into many peoples’ homes, whether via the aforementioned social media algorithms or virtual assistants like Amazon’s Alexa. Through winters and burst bubbles, the field of artificial intelligence has persevered and become a hugely significant part of modern life, and is likely to grow exponentially in the years ahead.

The Only Disaster Recovery Guide You Will Ever Need

Disaster recovery (DR) refers to the area of security planning that aims to protect your organization from the negative effects of significant adverse events. It allows an organization to either maintain or quickly resume its mission-critical functions following a data disaster without incurring significant losses in business operations or revenues.

    Disasters come in different shapes and sizes. They do not only refer to catastrophic events such as earthquakes, tornadoes or hurricanes, but also security incidents such as equipment failures, cyber-attacks, or even terrorism.

    In preparation, organizations and companies create DR plans detailing processes to follow and actions to take to resume their mission-critical functions.

    What is Disaster Recovery?

Disaster recovery focuses on the IT systems that help support an organization’s critical business functions. It is often associated with the term business continuity, but the two are not entirely interchangeable: DR is one part of business continuity, which focuses more broadly on keeping all aspects of a business running despite a disaster.

    Since IT systems have become critical to business success, disaster recovery is now a primary pillar within the business continuity process.

Most business owners do not usually consider that they may be victims of a natural disaster until an unforeseen crisis happens and ends up costing their company a great deal in operational and economic losses. These events can be unpredictable, and as a business owner, you cannot risk not having a disaster preparedness plan in place.

    What Kind of Disasters Do Businesses Face?

    Business disasters can either be technological, natural or human-made. Examples of natural disasters include floods, tornadoes, hurricanes, landslides, earthquakes and tsunamis. In contrast, human-made and technological disasters involve things like hazardous material spills, power or infrastructural failure, chemical and biological weapon threats, nuclear power plant blasts or meltdowns, cyberattacks, acts of terrorism, explosions and civil unrest.

    Potential disasters to plan for include:

    • Application failure
    • VM failure
    • Host failure
    • Rack failure
    • Communication failure
    • Data center disaster
    • Building or campus disaster
    • Citywide, regional, national and multinational disasters

    Why You Need DR

    Regardless of size or industry, when unforeseen events take place, causing daily operations to come to a halt, your company needs to recover quickly to ensure that you continue providing your services to customers and clients.

    Downtime is perhaps among the biggest IT expenses that a business faces. Based on 2014-2015 disaster recovery statistics from Infrascale, one hour of downtime can cost small businesses as much as $8,000, mid-size companies $74,000, and large organizations $700,000.

    For small and mid-sized businesses (SMBs), extended loss of productivity can lead to the reduction of cash flow through lost orders, late invoicing, missed delivery dates and increased labor costs due to extra hours resulting from downtime recovery efforts.

    If you do not anticipate major disruptions to your business and address them appropriately, you risk incurring long-term negative consequences and implications as a result of the occurrence of unexpected disasters.

    Having a DR plan in place can save your company from multiple risks, including:

    • Reputation loss
    • Out of budget expenses
    • Data loss
    • Negative impact on your clients and customers

As businesses have become more reliant on high availability, their tolerance for downtime has decreased. Therefore, many have a DR plan in place to prevent adverse disaster effects from affecting their daily operations.

    The Essence of DR: Recovery Point and Recovery Time Objectives

    The two critical measurements in DR and downtime are:

    • Recovery Point Objective (RPO): This refers to the maximum age of files that your organization must recover from its backup storage to ensure its normal operations resume after a disaster. It determines the minimum backup frequency. For instance, if your organization has a four-hour RPO, its system must back up every four hours.
    • Recovery Time Objective (RTO): This refers to the maximum amount of time your organization requires to recover its files from backup and resume normal operations after a disaster. Therefore, RTO is the maximum downtime amount that your organization can handle. If the RTO is two hours, then your operations can’t be down for a period longer than that.

    Once you identify your RPO and RTO, your administrators can use the two measures to choose optimal disaster recovery strategies, procedures and technologies.
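As a simple illustration of how these two measures can be checked, the Go sketch below compares a backup's age against an RPO target and a measured test recovery against an RTO target. The figures are hypothetical.

```go
package main

import (
	"fmt"
	"time"
)

// Hypothetical check of a backup schedule and a recovery drill against
// RPO/RTO targets; the four-hour and two-hour targets mirror the
// examples in the text above.
func main() {
	rpo := 4 * time.Hour
	rto := 2 * time.Hour

	lastBackup := time.Now().Add(-3 * time.Hour) // age of newest backup
	drillRecovery := 90 * time.Minute            // measured in a DR test

	if age := time.Since(lastBackup); age > rpo {
		fmt.Printf("RPO violated: newest backup is %v old (target %v)\n", age, rpo)
	} else {
		fmt.Println("RPO met")
	}
	if drillRecovery > rto {
		fmt.Printf("RTO violated: recovery took %v (target %v)\n", drillRecovery, rto)
	} else {
		fmt.Println("RTO met")
	}
}
```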

    To recover operations during tighter RTO windows, your organization needs to position its secondary data optimally to make it easily and quickly accessible. One suitable method used to restore data quickly is recovery-in-place, because it moves all backup data files to a live state, which eliminates the need to move them across a network. It can protect against server and storage system failure.

    Before using recovery-in-place, your organization needs to consider three things:

    • Its disk backup appliance performance
    • The time required to move all data from its backup state to a live one
    • Failback

    Also, since recovery-in-place can sometimes take up to 15 minutes, replication may be necessary if you want a quicker recovery time. Replication refers to the periodic electronic refreshing or copying of a database from computer server A to server B, which ensures that all users in the network always share the same information level.
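As a toy illustration of that periodic refresh, the sketch below copies a primary store to a replica on a fixed interval. Real replication ships transaction logs or blocks and handles conflicts and failures; this sketch only shows the cadence.

```go
package main

import (
	"fmt"
	"time"
)

// Toy periodic replication: every tick, copy server A's records to
// server B so both share the same information level.
func main() {
	primary := map[string]string{"order-1": "paid"}
	replica := map[string]string{}

	ticker := time.NewTicker(1 * time.Second) // refresh interval
	defer ticker.Stop()

	for i := 0; i < 3; i++ {
		<-ticker.C
		for k, v := range primary {
			replica[k] = v // copy primary's state to the replica
		}
		fmt.Println("replica now:", replica)
	}
}
```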

    Disaster Recovery Plan (DRP)


    A disaster recovery plan refers to a structured, documented approach with instructions put in place to respond to unplanned incidents. It’s a step-by-step plan that consists of the precautions put in place to minimize a disaster’s effects so that your organization can quickly resume its mission-critical functions or continue to operate as usual.

    Typically, DRP involves an in-depth analysis of all business processes and continuity needs. What’s more, before generating a detailed plan, your organization should perform a risk analysis (RA) and a business impact analysis (BIA). It should also establish its RTO and RPO.

    1. Recovery Strategies

    A recovery strategy should begin at the business level, which allows you to determine the most critical applications to run your organization. Recovery strategies define your organization’s plans for responding to incidents, while DRPs describe in detail how you should respond.

    When determining a recovery strategy, you should consider issues such as:

    • Budget
    • Resources available such as people and physical facilities
    • Management’s position on risk
    • Technology
    • Data
    • Suppliers
    • Third-party vendors

    Management must approve all recovery strategies, which should align with organizational objectives and goals. Once the recovery strategies are developed and approved, you can then translate them into DRPs.

    2. Disaster Recovery Planning Steps

    The DRP process involves a lot more than simply writing the document. A business impact analysis (BIA) and risk analysis (RA) help determine areas to focus resources in the DRP process.

    The BIA is useful in identifying the impacts of disruptive events, which makes it the starting point for risk identification within the DR context. It also helps generate the RTO and RPO.

    The risk analysis identifies vulnerabilities and threats that could disrupt the normal operations of processes and systems highlighted in the BIA. The RA also assesses the likelihood of the occurrence of a disruptive event and helps outline its potential severity.

    A DR plan checklist has the following steps:

    • Establishing the activity scope
    • Gathering the relevant network infrastructure documents
    • Identifying severe threats and vulnerabilities as well as the organization’s critical assets
    • Reviewing the organization’s history of unplanned incidents and their handling
    • Identifying the current DR strategies
    • Identifying the emergency response team
    • Having the management review and approve the DRP
    • Testing the plan
    • Updating the plan
    • Implementing a DR plan audit

    3. Creating a DRP

    An organization can start its DRP with a summary of all the vital action steps required and a list of essential contacts, which ensures that crucial information is easily and quickly accessible.

    The plan should also define the roles and responsibilities of team members while also outlining the criteria to launch the action plan. It must then specify, in detail, the response and recovery activities. The other essential elements of a DRP template include:

    • Statement of intent
    • The DR policy statement
    • Plan goals
    • Authentication tools such as passwords
    • Geographical risks and factors
    • Tips for dealing with the media
    • Legal and financial information
    • Plan history

    4. DRP Scope and Objectives

    A DRP can range in scope (i.e., from basic to comprehensive). Some can be upward of 100 pages.

DR budgets can vary significantly and fluctuate over time. Therefore, your organization can take advantage of any free resources available, such as online DR plan templates from the Federal Emergency Management Agency. A lot of free information and how-to articles are also available online.

    A DRP checklist of goals includes:

    • Identifying critical IT networks and systems
    • Prioritizing the RTO
    • Outlining the steps required to restart, reconfigure or recover systems and networks

    The plan should, at the very least, minimize any adverse effects on daily business operations. Your employees should also know the necessary emergency steps to follow in the event of unforeseen incidents.

Distance, though important, is often overlooked during the DRP process. A DR site located close to the primary data center is ideal in terms of convenience, cost, testing and bandwidth. However, since outages differ in scope, a severe regional event may destroy both the primary data center and its DR site when the two are located close together.

    5. Types of Disaster Recovery Plans

    You can tailor a DRP for a given environment.

• Virtualized DRP: Virtualization allows you to implement DR in an efficient and straightforward way. Using a virtualized environment, you can create new virtual machine (VM) instances immediately and provide high-availability application recovery. What’s more, it makes testing easier to achieve. Your plan must include the ability to validate that applications can run in DR mode and return to normal operations within the RTO and RPO.
• Network DRP: Coming up with a plan to recover a network gets more complicated as network complexity increases, so it is essential to detail the recovery procedure step by step, test it correctly, and keep it updated. Data in this plan is specific to the network, for instance its performance characteristics and its networking staff.
    • Cloud DRP: A cloud-based DR can range from file backup to a complete replication process. Cloud DRP is time-, space- and cost-efficient; however, maintaining it requires skill and proper management. Your IT manager must know the location of both the physical and virtual servers. Also, the plan must address security issues related to the cloud.
    • Data Center DRP: This plan focuses on your data center facility and its infrastructure. One key element in this DRP is an operational risk assessment since it analyzes the key components required, such as building location, security, office space, power systems and protection. It must also address a broader range of possible scenarios.

    Disaster Recovery Testing

Testing substantiates all DRPs. It identifies deficiencies in the plan and provides opportunities to fix any problems before a disaster occurs. Testing can also offer proof of the plan’s effectiveness and its ability to hit RPO and RTO targets.

    IT technologies and systems are continually changing. Therefore, testing ensures that your DRP is up to date.

    Some reasons for not testing DRPs include budget restrictions, lack of management approval, or resource constraints. DR testing also takes time, planning and resources. It can also be an incident risk if it involves the use of live data. However, testing is an essential part of DR planning that you should never ignore.

    DR testing ranges from simple to complex:

    • A plan review involves a detailed discussion of the DRP and looks for any missing elements and inconsistencies.
    • A tabletop test sees participants walk through the plan’s activities step by step. It demonstrates whether DR team members know their duties during an emergency.
• A simulation test is a full-scale test that uses resources such as backup systems and recovery sites without an actual failover.
    • Running in disaster mode for a period is another method of testing your systems. For instance, you could failover to your recovery site and let your systems run from there for a week before failing back.

    Your organization should schedule testing in its DR policy; however, be wary of its intrusiveness. This is because testing too frequently is counter-productive and draining on your personnel. On the other hand, testing less regularly is also risky. Additionally, always test your DR plan after making any significant system changes.

    To get the most out of testing:

    • Secure management approval and funding
    • Provide detailed test information to all parties concerned
    • Ensure that the test team is available on the test date
    • Schedule your test correctly to ensure that it doesn’t conflict with other activities or tests
    • Confirm that test scripts are correct
    • Verify that your test environment is ready
    • Schedule a dry run first
    • Be prepared to stop the test if needed
    • Have a scribe take notes
    • Complete an after-action report detailing what worked and what failed
    • Use the results gathered to update your DR plan

    Disaster Recovery-as-a-Service (DRaaS)

Disaster recovery-as-a-service is a cloud-based DR method that has gained popularity over the years. This is because DRaaS lowers costs, is easier to deploy, and allows regular testing.

    Cloud testing solutions save your company money because they run on shared infrastructure. They are also quite flexible, allowing you to sign up for only the services you need, and you can complete your DR tests by only spinning up temporary instances.

    DRaaS expectations and requirements are documented and contained in a service-level agreement (SLA). The third-party vendor then provides failover to their cloud computing environment, either on a pay-per-use basis or through a contract.

    However, cloud-based DR may not be available after large-scale disasters since the DR site may not have enough room to run every user’s applications. Also, since cloud DR increases bandwidth needs, the addition of complex systems could degrade the entire network’s performance.

Perhaps the biggest disadvantage of cloud DR is that you have little control over the process; thus, you must trust your service provider to implement the DRP in the event of an incident while meeting the defined recovery point and recovery time objectives.

    Costs vary widely among vendors and can add up quickly if the vendor charges based on storage consumption or network bandwidth. Therefore, before selecting a provider, you need to conduct a thorough internal assessment to determine your DR needs.

    Some questions to ask potential providers include:

    • How will your DRaaS work based on our existing infrastructure?
    • How will it integrate with our existing DR and backup platforms?
    • How do users access internal applications?
    • What happens if you cannot provide a DR service we need?
• How long can we run in your data center after a disaster?
    • What are your failback procedures?
    • What is your testing process?
    • Do you support scalability?
    • How do you charge for your DR service?

    Disaster Recovery Sites

    A DR site allows you to recover and restore your technology infrastructure and operations when your primary data center is unavailable. These sites can be internal or external.

    As an organization, you are responsible for setting up and maintaining an internal DR site. These sites are necessary for companies with aggressive RTOs and large information requirements. Some considerations to make when building your internal recovery site are hardware configuration, power maintenance, support equipment, layout design, heating and cooling, location and staff.

    Though much more expensive compared to an external site, an internal DR site allows you to control all aspects of the DR process.

    External sites are owned and operated by third-party vendors. They can either be:

• Hot: A fully functional data center complete with hardware and software, round-the-clock staff, and personnel and customer data.
    • Warm: It’s an equipped data center with no customer data. Clients can install additional equipment or introduce customer data.
    • Cold: It has the infrastructure in place to support data and IT systems. However, it has no technology until client organizations activate DR plans and install equipment. Sometimes, it supplements warm and hot sites during long-term disasters.

    Disaster Recovery Tiers

During the 1980s, two entities, the SHARE Technical Steering Committee and International Business Machines (IBM), came up with a tier system for describing DR service levels. The system described off-site recoverability, with tier 0 representing the least recoverability and tier 6 the most.

    A seventh tier was later added to include DR automation. Today, it represents the highest availability level in DR scenarios. Generally, as the ability to recover improves with each tier, so does the cost.

    The Bottom Line

    Preparation for a disaster is not easy. It requires a comprehensive approach that takes everything into account and encompasses software, hardware, networking equipment, connectivity, power, and testing that ensures disaster recovery is achievable within RPO and RTO targets. Although implementing a thorough and actionable DR plan is no easy task, its potential benefits are significant.

Everyone in your company must be aware of any disaster recovery plan put in place, and during implementation, effective communication is essential. It is imperative that you not only develop a DR plan but also test it, train your personnel, document everything correctly, and improve it regularly. Finally, be careful when hiring the services of any third-party vendor.

    Need an enterprise-level disaster recovery plan for your organization? Veritas can help. Contact us now to receive a call from one of our representatives.

    The Veritas portfolio provides all the tools you need for a resilient enterprise. From daily micro disasters to a “black swan” event, Veritas covers at scale. Learn more about Data Resiliency.

SingleStore announces $116M financing led by Goldman Sachs Asset Management

SingleStore, the cloud-native database built for speed and scale to power data-intensive applications, today announced it has raised $116 million in financing led by the growth equity business within Goldman Sachs Asset Management (Goldman Sachs), with new participation from Sanabil Investments. Current investors Dell Technologies Capital, GV, Hewlett Packard Enterprise (HPE), IBM Ventures and Insight Partners, among others, also participated in the round.

    “By unifying different types of workloads in a single database, SingleStore supports modern applications, which frequently run real-time analytics on transactional data,” said Holger Staude, managing director at Goldman Sachs. “The company aims to help organizations overcome the challenges of data intensity across multi-cloud, hybrid and on-prem environments, and we are excited to support SingleStore as it enters a new phase of growth.”

    “Our purpose is to unify and simplify modern data,” said SingleStore CEO Raj Verma. “We believe the future is real time, and the future demands a fast, unified and high-reliability database — all aspects in which we are strongly differentiated. I am very excited to partner with Goldman Sachs, the beacon of financial institutions, and further expand our relationship.”

    “At Siemens Global Business Services, we rely on SingleStore to drive our Pulse platform, which requires us to process massive amounts of data from disparate sources,” said Christoph Malassa, Head of Analytics and Intelligence Solutions, Siemens. “The speed and scalability SingleStore provides has allowed us to better serve both our customers and our internal team, and to expand our capabilities along with them, e.g. enabling online analytics that previously had to be conducted offline.”

The funding comes on the heels of the company’s recent onboarding of its new chief financial officer, Brad Kinnish, and today the company is pleased to welcome Meaghan Nelson as its new general counsel. These two strategic executive hires bring a great depth of experience to the C-suite, leaving it even better equipped to explore future paths for company growth.

    “I am beyond thrilled to join the team at SingleStore,” said Kinnish. “It’s such an exciting time in the database industry. Major forces such as the rise in cloud and the blending of operational and transactional workloads are causing a third wave of disruption in the way data is managed. SingleStore by design is a leader in the market, and I am confident we will achieve a lot in the coming year.”

SingleStore’s new general counsel, Meaghan Nelson, brings over a decade of legal experience to SingleStore, including her most recent role as associate general counsel at SaaS company Veeva Systems, as well as prior roles in private practice taking companies such as MaxPoint Interactive, Etsy, Natera and Veeva through their IPOs.

    “I couldn’t be more excited to join SingleStore at this important inflection point for the company,” said Nelson. “I feel that my deep experience working closely with companies through the IPO process along with my experience in scaling G&A orgs will be of great value to SingleStore as we continue to achieve new heights.”

Previous investments from IBM Ventures, HPE and Dell have fueled SingleStore’s strong momentum. It recently launched SingleStoreDB with IBM and announced a partnership with SAS to deliver ultra-fast insights at lower costs. The company has almost doubled its headcount in the last 12 months and continues to hire aggressively to meet demand for its products and services.

    This funding follows SingleStore’s recent product release that empowers customers to create fast and interactive applications at scale and in real time. SingleStore will feature and demo these enhancements at a virtual launch event, [r]evolution 2022, tomorrow, July 13. Register and learn more about the event here.

SD Times news digest: Boomi Blueprint Framework for Data Management, Microsoft to end Windows’ PHP 7.2 support, and Instana enterprise enhancements

    Boomi’s Blueprint framework includes leadership guidance, design practice, and implementation practices. 

    “This set of best practices provides companies with the ability to respond to disruptive forces and quickly adapt their digital platform towards desired business vision and outcomes,” Boomi wrote in a post.

In addition, the leadership guidance provides the Digital Ideation Lab, a Boomi innovation pop-up lab where digital experts will explore technologies, develop prototypes, and create reference architectures for rapid business deployment.

    Microsoft to end Windows’ PHP 7.2 support
    Microsoft stated that PHP 7.2 will go out of support this November. 

Meanwhile, PHP 7.3 will be going into security fix mode in November, and 7.4 will have two more years of support from that point.

    Microsoft will not support PHP for Windows in any capacity for version 8.0 and beyond.

    Instana enterprise enhancements
    Instana launched enterprise enhancements to help organizations manage mission critical applications more effectively. 

    New features include custom dashboards, NGINX Tracing, the IBM MQ Monitoring Sensor and Redis Enterprise Monitoring Sensor, and role-based access control.

    “Unlike traditional APM tools, Instana’s automated APM solution discovers all application service components and application infrastructure, including infrastructure such as AWS Lambda, Kubernetes and Docker,” Instana wrote in a post.

    UiPath announces $225 million funding
    UiPath said it will use the funding to invest more in its research and development of automation solutions. 

“We will advance our market-leading platform and will continue to deepen our investments in AI-powered innovation and expanded cloud offerings,” said Daniel Dines, the co-founder and CEO of UiPath. “COVID-19 has heightened the critical need of automation to address challenges and create value in days and weeks, not months and years. We are committed to working harder to help our customers evolve, transform, and succeed fast in the new normal.”

    UiPath released its end-to-end hyperautomation platform in May 2020. Additional details are available here.

    Apache weekly updates
New releases from Apache last week included Apache Jackrabbit 2.21.2, an unstable release cut directly from the trunk with a focus on new features and other improvements.

The week also saw the release of Apache Tomcat 7.0.105, 8.5.57, 9.0.37, and 10.0.0-M7, which contain a number of bug fixes and improvements over the previous releases in each branch.

    ApacheCon is set for an online event on 29 September – 1 October. Additional details are available here.
