Download S90-01 exam practice questions that showed up in the actual test today

There are many reviews on the web that will lead you to believe you have found the definitive source of legitimate Fundamental SOA & Service-Oriented Computing study materials. Practically every candidate passes the exam using materials that contain actual test questions and answers. Memorizing and practicing the S90-01 practice test is sufficient to pass with good grades.

Exam Code: S90-01 Practice test 2022 by team
S90-01 Fundamental SOA & Service-Oriented Computing

This course provides a well-rounded, end-to-end overview of service-oriented computing, service-orientation and SOA. Attendees benefit from this fundamental coverage by gaining an understanding of common terms, concepts and important industry developments.

The following primary subjects are covered:
– Strategic Goals of Service-Oriented Computing
– Fundamental Service-Oriented Computing Terms
– Concepts relating to Services, Service-Oriented Architecture and Service Compositions
– Introduction to the Service-Orientation Design Paradigm and related Principles and Concepts
– SOA Project Delivery Approaches and Planning
– Introduction to the Service Delivery Lifecycle, including Service-Oriented Analysis, Service-Oriented Design and Service Modeling
– SOA Adoption Impacts and Requirements
– Enterprise Service Bus, Web Services, REST Services
– Service Grids and Service Virtualization
– Cloud Computing and SOA Connection Points

This course can be taken as part of instructor-led workshops taught by Arcitura Certified Trainers. These workshops can be open for public registration or delivered privately for a specific organization. Certified Trainers can teach workshops in-person at a specific location or virtually using a video-enabled remote system, such as WebEx. Visit the Workshop Calendar page to view the current calendar of public workshops or visit the Private Training page to learn more about Arcitura's worldwide private workshop delivery options.

Below are the base materials provided to public and private workshop participants. Some public and private workshops offer promotions whereby participants receive entire Self-Study Kits for attending workshops, or are offered discounts for the purchase of Self-Study Kits and/or Pearson VUE test vouchers.

Note that all SOACP course modules are focused on vendor-neutral subjects and therefore do not provide detailed coverage of any vendor-specific platforms or technologies. SOACP courses are intentionally authored this way so as to provide an unambiguous, objective, and industry-level understanding of practices and technology that can be further complemented with vendor-specific training.

Fundamental SOA & Service-Oriented Computing
SOA Service-Oriented basics
Dragan Gasevic's home page

Model-Driven Development of Families of Semantically-enabled Service-oriented Architectures

Grant Provider: Alberta Ingenuity
Program: New Faculty Award
Principal investigator: Dragan Gasevic
Duration: 2009-2011

Stimulated by the huge wealth of opportunities for collaboration and communication offered by the World Wide Web and Internet technologies, there is today a rapidly growing demand for novel ways to integrate business processes in almost all sectors. A major goal is to provide flexible and effective methods to integrate the software systems of collaborating parties. Service-oriented architectures (SOAs) appear to be the most promising paradigm for distributed computing, by introducing new approaches to how software applications are designed, architected, delivered and used. According to the most commonly used definition, services, as the first-class citizens of SOAs, are autonomous, platform-independent computational elements that can be described, published, discovered, orchestrated and programmed using standard protocols for the purpose of building networks of collaborating applications distributed within and across organizational boundaries. The most mature realization of SOAs is Web services that provide the basis for the development and execution of business processes that are distributed over the network and available via a set of widely-accepted standards.
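The describe/publish/discover/invoke lifecycle mentioned above can be sketched with a toy in-memory registry. The class and method names here are illustrative only, not part of any SOA standard such as WSDL or UDDI:

```python
# Toy service registry illustrating the publish/discover/invoke cycle.
# All names are illustrative; real SOAs use standards such as WSDL and UDDI.

class ServiceRegistry:
    def __init__(self):
        self._services = {}  # service name -> description and endpoint

    def publish(self, name, description, endpoint):
        """A provider publishes a service under a searchable description."""
        self._services[name] = {"description": description, "endpoint": endpoint}

    def discover(self, keyword):
        """A consumer discovers services whose description mentions a keyword."""
        return [n for n, s in self._services.items() if keyword in s["description"]]

    def invoke(self, name, payload):
        """The consumer binds to a discovered service and invokes it."""
        return self._services[name]["endpoint"](payload)

registry = ServiceRegistry()
registry.publish("quote", "returns a price quote for an order",
                 lambda order: {"total": 10 * order["qty"]})

match = registry.discover("price")[0]        # finds "quote"
result = registry.invoke(match, {"qty": 3})  # {"total": 30}
```

In a real SOA the registry, the service contract, and the transport would each be standardized; the point of the sketch is only that provider and consumer are decoupled through the published description.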

Success stories of the use of SOAs (such as Amazon and Google) demonstrate high potential, and standards offer a guarantee of industrial adoption. Yet, the technological realization of the vision is still in its infancy. One important aspect is to enable (semi-)automatic ways for discovery, composition, and monitoring of services shared by different parties. The Semantic Web (also known as Web 3.0) offers a promising solution to address the problems of automation by using formally defined relations among concepts (e.g., words) used by members of a community (e.g., pharmacists). Given the lack of sound mechanisms for the design and development of SOAs, this additionally increases the complexity of the development process by introducing the need for additional standards and technologies. This is why developers need methods to help them re-use the same design in cases when they need to rapidly develop SOAs that share a common set of characteristics, and yet allow for some variability. That is to say, developers need methods to design families of semantically-enabled SOAs.

The main goal of this project is to provide a set of software engineering methods for developing families of semantically-enabled SOAs. In this project, we will explore the use of a well-proven concept of software product lines (SPL) embraced by many well-known industrial players (e.g., Microsoft and Nokia), as the SPL approach can increase productivity by as much as five to ten times. Assuming that both the SPL and SOA approaches are based on re-usable software components, that is, core assets and services, this project will provide a set of methods for bridging the gap in their underlying development philosophies. SPLs use software components internally developed by following a well-defined methodology to precisely cover the domain of the SPL, while SOA is grounded on the idea of open integration of business processes by means of shared services. The originality of our proposal is in the use of ontologies to bridge this gap between the "open" SOA and "closed" SPL domains by leveraging ontologies' features to precisely and formally define a domain and yet allow for sharing domain knowledge between collaborating parties. To avoid the problem of the complexity stemming from the use of additional technology into the already overwhelmed set that SOA developers have to use, we propose the use of a novel software engineering discipline - Model-Driven Engineering (MDE). By using MDE, we enable designers to focus on solving domain-specific problems rather than on technical details of SOA, taking advantage of proven MDE benefits, including but not limited to, productivity and reliability. Yet, our SOAs will be compliant with standards (e.g., SAWSDL, WSDL, XML, and OWL) due to the use of MDE to automate the development process. However, the originality of our approach is that we integrate conceptualizations of "closed" SPL and "open" SOA domains. 
To support this idea, we will design a set of mechanisms that will allow for a synergistic use of proven modeling languages widely adopted by both the SPL and SOA communities to serve the task of designing families of semantically-enabled SOAs. We will also improve the quality of SOAs by providing additional verification mechanisms based on Semantic Web (service) reasoning. Moreover, our results have major potential to pave the way for the wider and faster adoption of next-generation SOAs based on advanced levels of automatic processing. As such, our results will provide software developers with methods to use proven software engineering practices for developing SOAs, and thus better respond to the rapidly growing demands of the service economy.
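As a rough illustration of the SPL idea the project builds on, a product family can be modeled as mandatory core assets plus optional features with cross-feature constraints; deriving a family member is then a validated feature selection. All feature names below are hypothetical, not taken from the project itself:

```python
# Minimal software-product-line sketch: a family is a set of mandatory and
# optional features; a concrete product is a valid selection of features.
# Feature names are invented for illustration.

FAMILY = {
    "mandatory": {"order_service", "billing_service"},
    "optional": {"semantic_annotations", "monitoring"},
    # a simple cross-feature constraint: monitoring requires annotations
    "requires": {"monitoring": {"semantic_annotations"}},
}

def derive_product(selected_optional):
    """Derive one family member; raise if the selection violates a constraint."""
    selection = set(selected_optional)
    if not selection <= FAMILY["optional"]:
        raise ValueError("unknown feature selected")
    for feature, deps in FAMILY["requires"].items():
        if feature in selection and not deps <= selection:
            raise ValueError(f"{feature} requires {deps}")
    return FAMILY["mandatory"] | selection

product = derive_product({"semantic_annotations", "monitoring"})
```

The "closed" SPL side is the fixed feature model and its constraints; the "open" SOA side would expose each derived feature set as shared services, which is exactly the gap the project proposes to bridge with ontologies.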

Fri, 19 Nov 2021 02:26:00 -0600
Service-oriented architecture (SOA)
  • Choice and ease of use: key elements of a good app store

    Companies need to focus on individual user requirements and make sure their app store delivers the right tools for the right job

  • What ingredients go into the successful app store?

    An enterprise app store should provide the controlled availability of apps for employees. We look at what is available for organisations to use

  • A guide to smart home automation

    Unless it is a new build, the challenge in creating a smart home is that technology must work irrespective of the age of the property

  • Why agile development races ahead of traditional testing

    Traditional testing practices optimise large, centralised testing but struggle to support the rapid delivery of agile development.

  • The limitations of traditional SaaS integration approaches

    Time and budget constraints are forcing many organisations to rethink their integration strategy and find a well-aligned solution

  • How to integrate SaaS with your local business systems

    As software as a service matures, CIOs need to look at how well cloud-based products integrate with existing on-premise systems

  • Outsourcing IT services: goals and negotiations

    Tactical and strategic negotiations are important to businesses of all sizes, from SMEs to the largest enterprises

  • Case study: Nokia rethinks HR with web portal

    Mobile phone manufacturer Nokia has reduced its HR costs by between 20% to 30% after rolling out a web-based HR portal for its 60,000 employees.

  • CIO interview: Martin Davies, head of technology at Bet365

    The success story of £5.4bn-per-year online gambling company Bet365 is nothing to do with luck and everything to do with technology.

  • ING branches out with SOA mortgage app

    ING Direct has completed "its most complex IT project to date" using an onshore and offshore programming team from Capgemini to support the bank's move to become a mortgage provider.

  • Siebel expands SOA capabilities with IBM

    Siebel Systems will advance its platform with service-oriented architecture composite applications in the IBM technology environment.

  • Blue Titan fine tunes SOA tool

    Attempting to fine-tune its software for use in business-critical applications, Blue Titan Software will unveil Network Director...

  • SOAP

    It's nice to know those higher up the channel have such a positive opinion of their reseller partners. One distributor Soap was...

    RAMI 4.0 Reference Architectural Model for Industrie 4.0
    • By Bill Lydon
    • Automation IT
    Three-dimensional map showing how to approach Industry 4.0 in a structured manner

    RAMI 4.0, the Reference Architecture Model for Industrie 4.0 (Industry 4.0), was developed by the German Electrical and Electronic Manufacturers' Association (ZVEI) to support Industry 4.0 initiatives, which are gaining broad acceptance throughout the world. Industry 4.0 is a holistic view of manufacturing enterprises that started in Germany, with many worldwide cooperative efforts, including in China, Japan, and India. Its concepts, structure, and methods are being adopted worldwide to modernize manufacturing.

    Effective manufacturing

    Throughout the world, there is a recognition that to be competitive, manufacturing needs to modernize. The Industry 4.0 movement in particular continues to accelerate, defining the pattern of how all industrial automation can achieve the goal of holistic and adaptive automation system architectures. A driving force behind the development of Industry 4.0 is the realization that pursuing low labor rates is not a winning strategy. Remaining competitive and flexible can only be accomplished by leveraging advanced technologies, centering on automation to enable a successful transition. Germany's Industrie 4.0 initiative has ignited cooperative efforts in China, Japan, and India.

    Industry 4.0 is interdisciplinary, where the standards applicable in mechanical engineering, electronics, electrical engineering, and communications and information technology need to be combined with the respective technologies needed for their implementation.

    Discrete and process industries

    The development of RAMI 4.0 focused on industrial production as the primary area of application, including discrete manufacturing to process industries. Industry 4.0 concepts are being applied to process industries to achieve a holistic integration of automation, business information, and manufacturing execution functions to improve all aspects of production and commerce across process industry value chains for greater efficiency. The "Process Sensor 4.0 Roadmap" initiated by NAMUR and VDI/VDE, in collaboration with several prominent leaders in the industry (including ABB, BASF, Bayer Technology Services, Bilfinger Maintenance, Endress+Hauser, Evonik, Festo, Krohne, Lanxess, Siemens, and Fraunhofer ICT), reflects the intent of creating fundamental building blocks to advance process automation system architectures. A number of NAMUR working groups are part of Working Area 2 (WA 2), Automation Systems for Processes and Plants.

    Related to this activity, the OPC Foundation and FieldComm Group have an initiative to create a protocol-independent, process automation device information model (PA-DIM) specification based on the industrial interoperability standard OPC UA. PROFIBUS/PROFINET International is now participating in this vision, which is supported by NAMUR as part of its Open Architecture (NOA) initiative. The goal is to enable end users to dramatically reduce the time to implement advanced analytics, big data projects, and enterprise cloud solutions that rely on information from thousands of geographically dispersed field devices using multiple process automation protocols.
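Generically, a protocol-independent device information model works by having each protocol adapter map its native reading onto one shared shape, so analytics code never depends on the underlying fieldbus. The sketch below illustrates the idea only; its adapter and field names are invented, not taken from the PA-DIM specification:

```python
# Generic sketch of a protocol-independent device information model:
# each adapter maps its protocol's native reading onto one common shape.
# Adapter and field names are hypothetical, not from the PA-DIM spec.

def from_hart(raw):
    return {"tag": raw["device_tag"], "value": raw["pv"], "unit": raw["pv_unit"]}

def from_profibus(raw):
    return {"tag": raw["station"], "value": raw["main_value"], "unit": raw["unit_code"]}

ADAPTERS = {"hart": from_hart, "profibus": from_profibus}

def normalize(protocol, raw):
    """Map a protocol-specific reading onto the common information model."""
    return ADAPTERS[protocol](raw)

a = normalize("hart", {"device_tag": "FT-101", "pv": 12.5, "pv_unit": "m3/h"})
b = normalize("profibus", {"station": "PT-202", "main_value": 3.1, "unit_code": "bar"})
# a and b now share one schema regardless of source protocol
```

An enterprise analytics or cloud application would then consume only the normalized form, which is the decoupling the PA-DIM effort aims to standardize.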

    “RAMI 4.0 is a three-dimensional map showing the most important aspects of Industrie 4.0. It ensures that all participants involved share a common perspective and develop a common understanding,” explains Kai Garrels, chair of the working group Reference Architectures, Standards and Norms at the Plattform Industrie 4.0, and head of standardization and industry relations at ABB.

    RAMI 4.0 definition

    The RAMI 4.0 Reference Architectural Model and the Industry 4.0 components give companies a framework for developing future products and business models. RAMI 4.0 is a three-dimensional map showing how to approach the deployment of Industry 4.0 in a structured manner. A major goal of RAMI 4.0 is to make sure that all participants involved in Industry 4.0 discussions and activities have a common framework to understand each other. The RAMI 4.0 framework is intended to enable standards to be identified to determine whether there is any need for additions and amendments. This model is complemented by the Industry 4.0 components. Both results are described in DIN SPEC 91345 (Reference Architecture Model Industrie 4.0). DIN represents German interests within the International Organization for Standardization (ISO). Today, roughly 85 percent of all national standard projects are European or international in origin.

    Putting the RAMI 4.0 model in perspective, in the glossary of the VDI/VDE-GMA 7.21 Industrie 4.0 technical committee, a reference model is defined as a model that can be generally applied and can be used to derive specific models. There are many examples of this in the field of technology. The most well known is the seven-layer ISO/OSI model, which is used as a reference model for network protocols. The advantage of using such models is a shared understanding of the function of every layer/element and the defined interfaces between the layers.

    Important characteristics

    RAMI 4.0 defines a service-oriented architecture (SOA) where application components provide services to the other components through a communication protocol over a network. The basic principles of SOA are independent of vendors, products, and technologies. The goal is to break down complex processes into easy-to-grasp packages, including data privacy and information technology (IT) security.

    ZVEI characterizes the change in manufacturing systems as follows. The current "Old World: Industry 3.0" manufacturing system characteristics are:

    • hardware-based structure
    • functions bound to hardware
    • hierarchy-based communication
    • isolated product

    The "New World: Industry 4.0" manufacturing system characteristics are:

    • flexible systems and machines
    • functions distributed throughout the network
    • participants interact across hierarchy levels
    • communication among all participants
    • product part of the network

    RAMI 4.0 structure

    RAMI 4.0 consists of a three-dimensional coordinate system that describes all crucial aspects of Industry 4.0. In this way, complex interrelations are broken down into smaller and simpler clusters.

    "Hierarchy Levels" axis

    On the right horizontal axis are hierarchy levels from IEC 62264, the international standards series for enterprise IT and control systems. These hierarchy levels represent the different functionalities within factories or facilities. (Note that the IEC 62264 standard is based upon ANSI/ISA-95.) To represent the Industry 4.0 environment, these functionalities have been expanded to include work pieces, labeled "Product," and the connection to the Internet of Things and services, labeled "Connected World."

    "Life Cycle Value Stream" axis

    The left horizontal axis represents the life cycle of facilities and products, based on IEC 62890, Life-cycle management for systems and products, used in industrial-process measurement, control, and automation. Furthermore, a distinction is made between "types" and "instances." A "type" becomes an "instance" when design and prototyping have been completed and the real product is being manufactured. The model also combines all elements and IT components in the layer and life-cycle model.

    "Layers" axis

    The six layers on the vertical axis describe the decomposition of a machine into its properties, structured layer by layer, i.e., the virtual mapping of a machine. Such representations originate from information and communication technology, where properties of complex systems are commonly broken down into layers.

    Within these three axes, all crucial aspects of Industry 4.0 can be mapped, allowing objects such as machines to be classified according to the model. Highly flexible Industry 4.0 concepts can thus be described and implemented using RAMI 4.0. The model allows for step-by-step migration from the present into the world of Industry 4.0.
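Classifying an object against the three axes amounts to picking one coordinate per axis. In the sketch below, the axis values follow the published RAMI 4.0 model (the expanded IEC 62264 hierarchy levels, type/instance life-cycle phases, and the six layers); the function itself is only an illustrative way to express such a classification:

```python
# Classifying an asset by the three RAMI 4.0 axes described above.
# Axis values follow the published RAMI 4.0 model; the classify()
# function is an illustrative sketch, not part of any standard.

HIERARCHY = ["Product", "Field Device", "Control Device", "Station",
             "Work Centers", "Enterprise", "Connected World"]
LIFE_CYCLE = ["Type/Development", "Type/Maintenance",
              "Instance/Production", "Instance/Maintenance"]
LAYERS = ["Asset", "Integration", "Communication",
          "Information", "Functional", "Business"]

def classify(hierarchy, life_cycle, layer):
    """Return the RAMI 4.0 coordinates of an asset, validating each axis."""
    for value, axis in ((hierarchy, HIERARCHY), (life_cycle, LIFE_CYCLE), (layer, LAYERS)):
        if value not in axis:
            raise ValueError(f"{value!r} is not a value on its axis")
    return (hierarchy, life_cycle, layer)

# e.g., the control logic running in a deployed PLC:
robot_firmware = classify("Control Device", "Instance/Production", "Functional")
```

The same machine appears at several coordinates as it moves through its life cycle, which is what makes the model useful for step-by-step migration planning.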

    Benefits of RAMI 4.0

    The model integrates different user perspectives and provides a common way of seeing Industry 4.0 technologies. With RAMI 4.0, the requirements of sectors, from manufacturing automation and mechanical engineering to process engineering, can be addressed in industry associations and standardization committees. Thus, RAMI 4.0 brings a common understanding for standards and use cases.

    RAMI 4.0 can be regarded as a map of Industry 4.0 solutions. It is an orientation for plotting the requirements of sectors together with national and international standards to define and further develop Industry 4.0. With Industry 4.0 initiatives, there is a refreshing interest in having organizations work cooperatively and overcome the compartmentalization of the national standardization bodies.

    The challenge

    The influx of technology is starting to dramatically improve manufacturing. However, to do this effectively takes planning, and the RAMI 4.0 model is a focal point for understanding the entire manufacturing and supply chain.


    About The Authors

    Bill Lydon is an InTech contributing editor with more than 25 years of industry experience. He regularly provides news reports, observations, and insights.


    The German ZVEI industrial association, founded in 1918, represents the interests of the high-tech sector with a wide portfolio. ZVEI is committed to the common interests of the electrical industry in Germany and internationally. This commitment is supported by about 160 employees in the main office and 5,000 employees from the member companies in an honorary capacity.

    ZVEI is based in Frankfurt with offices in Berlin and Brussels. Through its EuropeElectro working group, ZVEI also has an office in Beijing. More than 1,600 companies, which employ about 90 percent of the staff of the electrical industry in Germany, have opted for membership in ZVEI. Its members include global, medium-sized, and family-owned companies. The sector has 868,000 employees in Germany, plus more than 736,000 employees all over the world.

    The basis of the association’s work is the exchange of experience and views between the members about current technical, economic, legal, and socio-political subjects in the electrical industry. From this exchange, common positions are drawn up, including proposals on research, technology, environmental protection, education, and science policy; in this way, ZVEI is a pacesetter of technological progress. It also supports market-related international standardization work.

    The association works with national business associations and organizations, European industry and trade associations, and international organizations. It is divided into 22 trade associations that comprise all member companies, each operating in the same market segment. In addition, ZVEI maintains nine state offices in Germany that represent the interests of the electrical industry in the country. Since June 2014, Michael Ziesemer, vice chairman of the board of the Endress + Hauser Group, has been president of the ZVEI. Klaus Mittelbach has been the executive director of the ZVEI management since 2008.

    Book Review: Applied SOA

    Applied SOA is a new book on Service Oriented Architecture written by four leading SOA practitioners: Michael Rosen, Boris Lublinsky, Kevin Smith, and Marc Balcer. The book is a handbook that aims at making you successful with your SOA implementation. We asked the authors a few questions before reviewing the book.

    In addition to our review, InfoQ was able to obtain a sample chapter. Chapter 3: Getting Started with SOA can be downloaded here.

    InfoQ: What have been the major hurdles people have encountered in their SOA initiatives?

    Boris: I think that the majority of people are more interested in the SOA technologies, aka Web services, rather than in business modeling and decomposition, i.e., the SOA foundation. For example, a typical SOA debate is REST vs. SOAP, not why we need to use services. Do not get me wrong, technology debates are important and I enjoy them, but as a result, in today's reality JBOWS (Just a Bunch Of Web Services) rules. That is why, in the book, we deliberately stayed away from these and other implementation debates and tried to address the heart of SOA - its architectural underpinnings.

    Kevin: We have painfully experienced that the lack of planning and governance has led to many chaotic SOA implementations. As Boris mentioned, many times, JBOWS will quickly get deployed without any focus on planning, semantic interoperability or enterprise data standards, and this makes integration difficult as new business partners eventually want to use these services. When services are deployed without plans related to service change management, enterprise management and real-time operations gets to be quite difficult and painful. One of the reasons that we wanted to write this book is to provide guidance based on the lessons that we have learned in this area, focusing on practical implementation – including planning, management, and governance.

    Another hurdle that we have seen is security. Real business solutions demand real solutions for security, and this is sometimes underestimated or overlooked in new SOA projects – with dire consequences. The “alphabet soup” of overlapping (and sometimes competing) standards, specifications and technologies used for securing SOA can be overwhelming, and most security standards for SOA include various options, each demanding an in-depth knowledge of the fundamentals of information security. We have found that some of the major challenges for businesses are identity and attribute federation between business partners, the subtleties of identity and attribute propagation, tradeoffs between security approaches and performance and availability, and access control policy management and enforcement – just to name a few. The questions that we see people asking are, “How do we get started?”, “What are our options for building security into our SOA?”, “How do we balance security and performance?” and “How do we actually apply the standards?” We are answering these questions in our book, providing a practical guide for SOA security, with solution blueprints and decision flows to make this decision-making process easier.

    Michael: As with most new technology solutions, the challenges are not technical but rather are around motivating and managing the structural changes required to effectively utilize the new technologies. Anyone can build a service. The challenge is building the right services, and doing so within the context of the enterprise rather than the limited scope of a single project. These are issues that we specifically address with the architecture presented in our book.

InfoQ: "Lack of skills" is often considered the #1 problem in SOA; how can a company go about solving it, beyond sharing Applied SOA with its architects and analysts?

    Boris: "Lack of skill" is a serious problem; the question is which skills we are talking about. In my opinion, SOA is more a business problem than a technical one, so bringing together even very skilled technical people will not always solve the SOA pain. We need "true architects" to advance SOA. I would love to say - "Buy a book - it will solve all of your problems", but realistically, reading a book may help to pinpoint the issues and evaluate some of the solution approaches; the real solutions still have to come from inside the company.

    Kevin: I would like to echo that. We need technologists with a good background in architecture and design, first of all – and some of this guidance is in our book. Probably as important is the fact that companies need analysts who understand the business problems for a particular domain. For example, if you are responsible for an enterprise SOA for a medical community, and you don’t actually have people in your company who intimately understand that medical community’s internal processes, then you aren’t going to do a good job. A company really needs to hire expertise from the business domain they are supporting, because as Boris said, SOA is more about solving business problems than technical.

    Michael: There is lots of training available today on important subjects such as business process design, SOA analysis and design, architecture etc. The important thing for a company to figure out first is what processes and approaches will work for them given their requirements, environment, culture and so on, and then to ramp up on the specific skills that will actually provide them value. Often some sort of competency center is a good approach to building a critical mass of the appropriate skills.

    InfoQ: Where are we with SOA today? Is it real? Where is the level of maturity going? What's next?

    Boris: I think SOA today is real, but not mature. The majority of companies are opting for the low-hanging fruit - Service Oriented Integration (SOI). As indicated in the recent Burton Group report, this is today a prevalent reason for SOA failure. Unless we start to seriously link SOA with the enterprise business model and organizational structure, it will probably never fully mature and live up to its expectations.

    Kevin: I would say that SOA is real, but the implementing technologies are not yet meeting the vision of what SOA can be. Many vendors push for you to buy their product suite for a “SOA in a box” solution, where interoperability works well if you use all of their products, but not so much if you integrate with other systems. Many web service toolkits have convenient point-and-click component-to-web-service tools, and so developers find it easy to use them, and the result is that the services are no more well designed than the initial components. Typical POJOs are not initially designed with a corresponding schema in mind, so when they are transformed into web services, semantic interoperability typically loses out, as the resulting service is quite literally tightly coupled to the object's implementation. I do think that the implementing technologies for SOA are getting much better, but it does take some discipline for architects, designers, and programmers to not choose the quick-and-dirty approaches that are offered.
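The coupling problem Kevin describes can be made concrete: a code-first service auto-generated from an object exposes the object's internals, while a contract-first service maps the object onto an agreed schema. The classes and field names below are invented for illustration:

```python
# Code-first vs. contract-first, sketched with invented names.

class OrderPojo:                      # internal implementation object
    def __init__(self, cid, items):
        self.cid = cid                # cryptic internal field names...
        self.items = items            # ...that consumers should never see

def code_first_message(obj):
    """Auto-generated from the object: consumers now depend on its internals."""
    return vars(obj)                  # leaks {'cid': ..., 'items': ...}

# Contract-first: the schema is agreed on before any implementation exists.
CONTRACT = {"customer_id": str, "line_items": list}

def contract_first_message(obj):
    """Hand-mapped to the contract: internals can change without breaking it."""
    msg = {"customer_id": obj.cid, "line_items": obj.items}
    assert all(isinstance(msg[k], t) for k, t in CONTRACT.items())
    return msg

o = OrderPojo("C42", [{"sku": "A", "qty": 2}])
```

Renaming `cid` inside `OrderPojo` silently breaks every consumer of the code-first message, but only the one mapping function in the contract-first case.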

    I think the hype factor is dying down on SOA, and that is a good thing. In the past, SOA evangelist salesmen were trying to sell a utopian vision of a SOA that brings world peace and is the silver bullet for all that ails you. It is important to know that SOA can help solve your business problems, and there are many technologies and standards used for SOA implementation. It is also important to know that SOA itself is technology agnostic. I think SOA as an architecture discipline will continue to mature and has a bright future.  

    Michael: SOA is definitely real. We know of many companies that have had SOA in place for 10 years. All the major tools, platforms and packaged applications are moving to SOA and more and more SaaS services are available all the time. But, the majority of companies today are just getting started with SOA. On the typical maturity scale of 0 to 5, most are probably around 1, so now is definitely the time for them to focus on architecture rather than services.

    Book Review

    All topics are treated in depth and should be readily applicable to solving many of the issues you might encounter as you implement a Service Oriented Architecture, whether you are an architect, analyst, designer, or CTO/CIO. All topics are illustrated with concrete, relevant examples drawn directly from real-world projects:

    The current collection of SOA books and articles is rich in high-level theory but light on practical advice

    In particular, this book will help you tie your SOA initiative with your Enterprise Architecture, IT Governance, Core Data and BPM initiatives.

    The authors have noted that after working with different companies, there are several common areas of confusion:

    • First, what is SOA, and how it differs from Web Services or other distributed technologies?
    • Second, what is the relationship between the business and SOA?
    • Third, how do you design a good service?
    • Fourth, how do you effectively integrate existing applications and resources into a service-oriented solution?
    • Finally, how do services fit into overall enterprise solutions?

    In particular, the authors argue that although the tools have reached a high enough level of maturity to make building "a" service easy, it is still a challenge to build a "good" service based on solid design principles, one that fits into an overall architecture and can be combined into larger business processes within the enterprise.

    The book is divided into three parts:

    • Understanding SOA
    • Designing SOA
    • Case Studies

    A key to understanding SOA is understanding the challenges it tackles:

    • Reuse
    • Efficient Development
    • Integration of applications and Data
    • Agility, Flexibility and alignment

    From this starting point, the authors develop an understanding of what is needed to achieve these goals (Chapter 3: Getting Started with SOA can be downloaded here). From their perspective, one must focus on:

    • Methodology and Service lifecycle
    • Reference Architecture
    • Business Architecture
    • Information Design
    • Identifying Services
    • Specifying Services

    One of the points they emphasize is the importance of the Information Architecture when implementing a Service Oriented Architecture, and in particular the role it plays in the definition of the service interface.

    In the second part, the authors share a deep level of expertise. Their approach is decidedly focused on the knowledge of the business. They rely on business context, business domain and business process models to identify services. Chapter 5 is dedicated to the relationship between information modeling and service design: 

    A fundamental difference between service operations and object methods is that service operations have a much larger granularity. Rather than many simple operations with simple parameters, services produce and consume big chunks of a document.

    This relationship is one of the toughest problems that needs to be solved in a Service Oriented Architecture, regardless of the technology that you are using. It helps with identifying appropriate interface footprints (which are easier to reuse by new consumers) and it helps with analyzing the impact of information model changes on service interfaces (versioning).
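A hypothetical sketch of this granularity difference (all names invented): the service exposes one coarse-grained operation that consumes a whole customer "document" in a single call, where an object-oriented design would use many fine-grained setter calls:

```java
import java.util.Map;

public class Granularity {
    // Service-style: one coarse-grained operation that consumes a whole
    // customer "document" in a single call, where an object-oriented design
    // would use many fine-grained setters (setName, setStreet, setCity...).
    public static String updateCustomer(Map<String, String> customerDocument) {
        // A real service would validate and persist the whole document here.
        return "updated:" + customerDocument.get("id");
    }

    public static void main(String[] args) {
        // One round trip carries the entire document.
        System.out.println(updateCustomer(
                Map.of("id", "42", "name", "Ada", "city", "Chicago")));
    }
}
```

The coarse interface is easier to version and reuse, at the cost of moving larger payloads per call.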

    Chapter 6 looks at the design of service interfaces. This chapter reviews interaction styles and provides step-by-step design guidelines to help with each aspect of the service: documents, operations, exceptions...

    Chapter 7 provides service implementation guidelines based on a Service Architecture which includes:

    • Service Interface Layer
    • Service Business Layer
    • Resource Access Layer

    In particular, the authors provide the detailed responsibilities of each of these layers.

    Chapter 8 looks at Service Composition.

    Service Composition is one of the great benefits of using SOA

    The chapter covers the Architecture Models of service composition as well as the different implementation models which include: plain old code, service component architecture, event-based and orchestration-based composition. The chapter also provides an in-depth discussion about the relationships between compositions and business rules, transactions and human activities. 

    Chapter 9 shifts gears and focuses on using services to build enterprise solutions. This topic remains one of the least understood among architects and analysts.

    Building enterprise solutions typically requires leveraging existing enterprise applications for service implementations and combining multiple existing services into enterprise solutions.

    The authors offer an adaptation of the Model-View-Controller pattern as the foundation of the architecture of Service Oriented Enterprise Solutions. The chapter also offers important discussion on service versioning, security, exception handling and logging, and management and monitoring, which are all important aspects of enterprise solutions.  

    Integration is an important part of SOA. Chapter 10 is dedicated to exploring the role that integration plays in SOA. The authors identify several "islands" that require some level of integration:

    • Islands of data
    • Islands of automation
    • Islands of security

    In the authors' opinion, the role of SOA is to rationalize information, activities and identities to enable existing and new consumers to reuse functionality from legacy systems of record and properly align their state. The chapter provides several patterns which can be used to efficiently implement services from legacy systems.

    Chapter 11 introduces the concepts of SOA security. The authors start by providing a thorough introduction to the WS-Security standards and discuss important subjects such as auditing, authorization and user identity propagation.

    Chapter 12 concludes this section with a practical SOA governance and service lifecycle framework. This chapter provides the in-depth knowledge needed to identify, implement, deploy, operate and version services. The authors introduce the OMG's Reusable Asset Specification (RAS), which can potentially be used to capture metadata about services. The chapter also covers run-time policies.

    The last section introduces two use cases:

    • Travel Insurance
    • Service-Based Integration in Insurance

    Each use case is developed in depth, with a sample of each artifact recommended in the previous section of the book. These use cases represent state-of-the-art SOA implementations of enterprise-class solutions.

    Applied SOA represents a complete introduction to SOA, with practical steps to set up an organization capable of delivering complex service-oriented solutions.

    Wed, 15 Jun 2022 12:00:00 -0500 en text/html
    SOA Agents: Grid Computing meets SOA

    SOA has made huge progress in recent years. It has moved from experimental implementations by software enthusiasts to the mainstream of today's IT. One of the main drivers behind this progress is the ability to rationalize and virtualize existing enterprise IT assets behind service interfaces that are well aligned with an enterprise business model and current and future enterprise processes. Further SOA progress was achieved through the introduction of the Enterprise Service Bus (ESB) - a pattern for virtualization of a services infrastructure. By leveraging mediations, service location resolution, service level agreement (SLA) support, etc., the ESB allows software architects to significantly simplify a services infrastructure. The last piece missing in the overall SOA implementation is enterprise data access. A possible solution for this problem, the Enterprise Data Bus (EDB, a pattern for ubiquitous access to enterprise data), was introduced in [1,2]. The EDB adds a third dimension to SOA virtualization, which can be broken down as follows:

    • Services - virtualization of the IT assets;
    • ESB - virtualization of the enterprise services access
    • EDB - virtualization of the enterprise data access.

    In other SOA developments, several publications [3,4,5] have suggested the use of Grid technology for improving scalability, high availability and throughput in SOA implementations. In this article, I will discuss how a Grid can be used in the overall SOA architecture and introduce a programming model for Grid utilization in service implementations. I will also discuss an experimental Grid implementation that can support the proposed architecture.

    SOA with EDB - overall architecture

    Following [1,2], the overall architecture of SOA with an EDB is presented in Figure 1.

    Figure 1: SOA architecture with the Enterprise Data Bus

    Here, the ESB is responsible for proper invocation of services, which are implemented by utilizing the EDB to access any enterprise data [1] that might be required by those services. This architecture provides the following advantages:

    • Explicit separation of concerns between implementation of the service functionality (business logic) and enterprise data access logic.
      Enterprise Data Bus effectively creates an abstraction layer, encapsulating details of enterprise data access and providing "standardized interfaces" to the services implementations.
    • EDB provides a single place for all of the transformations between enterprise semantic data models used by services [2] and data models of enterprise applications by encapsulating all of the access to enterprise data.
      As a result, service implementations can access enterprise data using a SOA semantic model, thus significantly simplifying the design and implementation of enterprise services.
    • Service implementations having access to the required enterprise data provided by the EDB allows for significantly simplified service interfaces and provides a looser coupling between service consumer and provider:
      • Because the service (consumer) can directly access data [2], the service invocation, for example, does not require the real values of parameters (input/output) to be sent as part of a service invocation. As a result, the service interface can be expressed in terms of data references (keys) instead of real data.
      • While an enterprise service model will evolve as the SOA implementation matures, the data reference definitions will rarely change. As a result, service interfaces based on key data are more stable.
      • Extending service implementations to use additional data can be implemented without impacting its consumers.
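A minimal sketch of this key-based interface idea, with invented names: the service accepts only a data reference (an account key) and resolves it internally against shared data, standing in for the EDB:

```java
import java.util.Map;

public class KeyBasedService {
    // Stand-in for the EDB: enterprise data addressable by key.
    private static final Map<String, Double> ACCOUNT_BALANCES =
            Map.of("acct-1", 100.0, "acct-2", 250.0);

    // The service interface carries only a data reference (the key),
    // not the account record itself; the data is resolved internally.
    public static double balanceFor(String accountKey) {
        return ACCOUNT_BALANCES.getOrDefault(accountKey, 0.0);
    }

    public static void main(String[] args) {
        System.out.println(balanceFor("acct-2")); // 250.0
    }
}
```

Because only keys cross the interface, the contract stays stable even when the underlying account record grows new fields.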

    Adding a Grid

    One of the possible implementations of the EDB is the use of a data grid such as WebSphere eXtreme Scale, Oracle Coherence Data Grid, GigaSpaces Data and Application Grid or NCache Distributed Data Grid.

    A data grid is a software system designed for building solutions ranging from simple in-memory databases to powerful distributed coherent caches that scale to thousands of servers. A typical data grid implementation partitions data into non-overlapping chunks that are stored in memory across several machines. As a result, extremely high levels of performance and scalability can be achieved using standard processes. Performance is achieved through parallel execution of updates and queries (different parts of the data can be accessed simultaneously on different machines), while scalability and fault tolerance are achieved by replicating the same data on multiple machines.
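A toy sketch of such partitioning (not any vendor's API; all names invented): each key is hashed to exactly one of N non-overlapping partitions, so different machines can serve different chunks in parallel:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DataGridPartitioning {
    // Route each key to exactly one of N partitions by hashing it.
    public static int nodeFor(String key, int nodeCount) {
        return Math.floorMod(key.hashCode(), nodeCount);
    }

    public static void main(String[] args) {
        int nodes = 4;
        List<Map<String, String>> partitions = new ArrayList<>();
        for (int i = 0; i < nodes; i++) partitions.add(new HashMap<>());

        // Every key lands in exactly one non-overlapping partition.
        for (String key : List.of("a", "b", "c", "d", "e")) {
            partitions.get(nodeFor(key, nodes)).put(key, "value-" + key);
        }
        System.out.println(partitions);
    }
}
```

Real grids add replication and rebalancing on top of this routing step, but the key-to-partition mapping is the core idea.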

    Figure 2 shows the use of a Grid as an EDB implementation. The Grid maintains an in-memory copy of the enterprise data, which represents the state of enterprise databases and applications.

    Figure 2 Grid as an EDB

    The introduction of Grid allows the repartitioning of data that exists in multiple databases and applications so that it adheres to the enterprise semantic model. This entails bringing together logically related data from different applications/databases in the enterprise into one cohesive data representation along with rationalizing the duplicate data which will inevitably exist in the enterprise.

    Grid implementations are typically supported by a publish/subscribe mechanism, allowing data changes to be synchronized between Grid memory and existing enterprise applications and data. A Grid-based intermediary allows for very fast access to the enterprise data using a data model optimized for such a service usage.

    Although Grid-based EDB (Figure 2) simplifies high speed access to the enterprise data, it still requires potentially significant data exchange between the EDB and the service implementation. A service must load all the required data, execute its processing and then store results back to the Grid.

    A better architecture is to bring execution of the services closer to the enterprise data by implementing services as coordinators of agents [7], which are executed in the memory space containing the enterprise data (Figure 3). A service implementation, in this case, receives a request and then starts one or more agents, which execute in the context of Grid nodes and return their results to the service implementation, which in turn combines the results of the agents' execution and returns the service's execution result.

    Figure 3 Service as an agent's coordinator

    This approach provides the following advantages over the Publish/Subscribe data exchange model:

    • It allows for local data manipulation that can significantly improve overall service execution performance, especially when dealing with large amounts of data (megabytes or even gigabytes of data).
    • Similar to the data partitioning, the real execution is partitioned between multiple Grid nodes, thus further improving performance, scalability and availability of such an architecture.
    • Because all services can access the same data, when service execution involves purely the manipulation of data with minimal request/replies, there is no need to pass data at all.
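The coordinator pattern can be sketched with plain Java concurrency, under the assumption that an ExecutorService stands in for the Grid nodes and the data slices are invented: each "agent" computes against its node-local slice, and the service combines the partial results:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AgentCoordinator {
    // Each "Grid node" holds a slice of the enterprise data.
    static final List<List<Integer>> NODE_DATA =
            List.of(List.of(1, 2, 3), List.of(4, 5), List.of(6, 7, 8, 9));

    // The service starts one agent per node and combines their partial results.
    public static int totalAcrossGrid() {
        ExecutorService grid = Executors.newFixedThreadPool(NODE_DATA.size());
        try {
            List<Future<Integer>> partials = new ArrayList<>();
            for (List<Integer> localData : NODE_DATA) {
                // The agent runs "next to" its data; only the small partial
                // result would cross the network in a real Grid.
                partials.add(grid.submit(
                        () -> localData.stream().mapToInt(Integer::intValue).sum()));
            }
            int total = 0;
            for (Future<Integer> partial : partials) {
                total += partial.get(); // combine the agents' replies
            }
            return total;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            grid.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(totalAcrossGrid()); // 45
    }
}
```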

    Software agents

    The concept of an agent can be traced back to the early days of research into Distributed Artificial Intelligence (DAI) which introduced the concept of a self-contained, interactive and concurrently-executing object. This object had some encapsulated internal state and could respond to messages from other similar objects. According to [7], "an agent is a component of software and/or hardware which is capable of acting exactingly in order to accomplish tasks on behalf of its user."

    There are several types of agents, as identified in [7]:

    • Collaborative agents
    • Interface agents
    • Mobile agents
    • Information/Internet agents
    • Reactive agents
    • Smart Agents

    Based on the architecture for the service implementation (Figure 3), we are talking about agents belonging to multiple categories:

    • Collaborative - one or more agents together implement the service functionality.
    • Mobile - agents are executed on the Grid nodes outside of the service context
    • Information - agents' execution directly leverages data located in the Grid nodes.

    In the rest of the article we will discuss a simple implementation of Grid and a programming model that can be used for building a Grid-based EDB and an agent-based service implementation.

    Grid implementation

    Among the most difficult challenges in implementing a Grid are high availability, scalability and the data/execution partitioning mechanisms.

    One of the simplest ways to ensure a Grid's high availability and scalability is to use messaging for internal Grid communications. Grid implementations can benefit from both point-to-point and publish-subscribe messaging:

    • Usage of messaging in point-to-point communications supports decoupling of Consumers from Providers. The request is not sent directly to the Provider, but rather to the queues monitored by the Provider(s). As a result, queuing provides for:
      • Transparently increasing the overall throughput by increasing the number of Grid nodes listening on the same queue.
      • Simple throttling of Grid nodes' load through controlling the number of threads listening on a queue.
      • Simplification of load balancing. Instead of the consumer deciding which provider to invoke, it writes the request to the queue. Providers pick up requests as threads become available for request processing.
      • Transparent failover support. If some of the processes listening on the same queue terminate, the rest will continue to pick up and process messages.
    • Usage of publish/subscribe messaging allows for the simplification of "broadcast" implementations within the Grid infrastructure. Such support can be extremely useful when synchronizing changes within a Grid configuration.
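The competing-consumers behavior described above (throughput scaling, throttling and failover on one queue) can be imitated in-process with a BlockingQueue, as a stand-in for a real messaging provider such as JMS; all names here are invented:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class QueueBalancing {
    // All requests go to one shared queue; any free worker picks up the next.
    public static int process(int requests, int workers) {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        for (int i = 0; i < requests; i++) queue.add(i);

        AtomicInteger processed = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int w = 0; w < workers; w++) {
            pool.submit(() -> {
                Integer request;
                // Adding workers raises throughput transparently; if one
                // worker dies, the rest keep draining the same queue.
                while ((request = queue.poll()) != null) {
                    processed.incrementAndGet(); // "process" the request
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processed.get();
    }

    public static void main(String[] args) {
        System.out.println(process(100, 3)); // 100
    }
}
```

Raising the worker count here is the in-process analogue of adding Grid nodes that listen on the same queue.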

    Depending on the Grid implementation, data/execution partitioning approaches can range from pure load-balancing policies (in the case of identical nodes) to dynamic indexing of Grid data. This mechanism can be either hard-coded in the Grid implementation or externalized in a specialized Grid service - a partition manager. The role of the partition manager is to partition Grid data among nodes and to serve as a "registry" for locating nodes (node queues) when routing requests. Externalizing the partition manager in a separate service introduces additional flexibility into the overall architecture through the use of "pluggable" partition manager implementations, or even multiple partition managers implementing different routing mechanisms for different types of requests.

    The overall Grid infrastructure, including the partition manager and Grid node communications, can be either directly exposed to the Grid consumer in the form of APIs used as part of a Grid request submission, or encapsulated in a set of specialized Grid nodes - Grid masters (controllers). In the first case, a specialized Grid library responsible for implementing request distribution and (optionally) combining replies has to be linked into the Grid consumer implementation. Although this option can, theoretically, provide the best possible overall performance, it typically creates a tighter coupling between the Grid implementation and its consumers [3]. In the second case, the Grid master implements a façade pattern [8] for the Grid, with all the advantages of this pattern - complete encapsulation of Grid functionality (and infrastructure) from the Grid consumer. Although the Grid master adds an additional network hop (and consequently some performance overhead), the loose coupling achieved is typically more important.

    The overall high-level Grid architecture supporting a two-level master/nodes implementation is presented in Figure 4.

    Figure 4 Grid High Level Architecture

    In addition to the components described above, the proposed architecture (Figure 4) contains two additional ones - a Grid administrator and a code repository.

    The Grid administrator provides a graphical interface showing currently running nodes, their load, memory utilization, supported data, etc.

    Because restarting Grid nodes or the master can be fairly expensive [4], we need to be able to introduce new code into the master/nodes without restarting them. This is done through the use of a code repository - currently implemented as a web-accessible collection of JARs. As developers implement new code that they want to run in the Grid environment, they can store it in the repository and dynamically load/invoke it (using Java's URLClassLoader) as part of their execution (see below).
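The dynamic-loading mechanism can be sketched with the JDK's URLClassLoader. In the real Grid, the URL array would point at the repository's JARs; this self-contained sketch leaves it empty so the parent loader resolves the class, which keeps the example runnable without a repository:

```java
import java.net.URL;
import java.net.URLClassLoader;

public class RepositoryLoader {
    // Load and instantiate a class through a URLClassLoader. In the Grid,
    // repositoryJars would point at the web-accessible code repository;
    // here it is left empty so the parent loader resolves the class.
    public static Object instantiate(String className) {
        URL[] repositoryJars = new URL[0];
        try (URLClassLoader loader = new URLClassLoader(
                repositoryJars, RepositoryLoader.class.getClassLoader())) {
            return loader.loadClass(className)
                         .getDeclaredConstructor().newInstance();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(instantiate("java.util.ArrayList").getClass().getName());
    }
}
```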

    Programming model

    In order to simplify the creation of applications running on the Grid, we have designed a job-items programming model (Figure 5) for execution on the Grid. This model is a variation of the Map/Reduce [9] pattern and works as follows:

    Figure 5 Job Items model

    1. The Grid consumer submits a job request (in the form of a job container) to the Grid master. The job container provides all of the information the master environment needs to instantiate the job: the job's code location (the location of the Java JAR, or an empty string, interpreted as local, statically linked code), the job's starting class, the job's input data and the job's context type, which allows a choice between multiple partition managers for splitting job execution.
    2. The Grid master's runtime instantiates the job's class, passing it the appropriate job context (partition manager) and a replier object that supports replies back to the consumer. Once the job object is created, the runtime starts its execution.
    3. The job's start-execution method uses the partition manager to split the job into items, each of which is sent to a particular Grid node for execution - the map step.
    4. Each destination Grid node receives an item execution request (in the form of an item container). The item container is similar to the job container and provides sufficient information for the Grid node to instantiate and execute the item: the item's code location, starting class, input data and context type.
    5. The Grid node's runtime instantiates the item's class, passing it the appropriate item context and a replier object that supports replies back to the job. Once the item object is created, the runtime starts its execution.
    6. The item's execution uses the replier object to send partial results back to the job. This allows the job implementation to start processing an item's partial results (the reduce steps) as soon as they become available. If necessary, additional items can be created and sent to the Grid nodes during this processing.
    7. The job can use its replier to send partial results to the consumer as they become available.
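The steps above can be condensed into a single-process sketch with invented interfaces and no messaging: the job splits work across partitions (map), each item replies with a partial result, and the job folds the partials (reduce):

```java
import java.util.List;
import java.util.function.Consumer;

public class JobItems {
    // An item executes on one Grid node against node-local data.
    interface Item { int execute(List<Integer> nodeLocalData); }

    // The job splits work into items (map) and folds their partial replies (reduce).
    public static int runJob(List<List<Integer>> partitions,
                             Consumer<Integer> partialReplier) {
        Item countItem = List::size;                     // the work each item performs
        int total = 0;
        for (List<Integer> partition : partitions) {     // map step
            int partial = countItem.execute(partition);  // runs on "a node"
            partialReplier.accept(partial);              // partial reply to the job
            total += partial;                            // reduce step
        }
        return total;
    }

    public static void main(String[] args) {
        int total = runJob(
                List.of(List.of(1, 2), List.of(3, 4, 5), List.of(6)),
                partial -> System.out.println("partial: " + partial));
        System.out.println("total: " + total);
    }
}
```

The partialReplier callback is the in-process analogue of the replier object: the job can begin reducing before all items finish.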

    The overall execution looks as follows (Figure 6)

    Figure 6 Job Items execution

    Detailed execution for both the Grid master and a node is presented in Figure 7

    Figure 7 Execution details

    In addition to implementing the Map/Reduce pattern, this programming model supports fully asynchronous data delivery at all levels. This not only significantly improves overall performance when job consumers can use partial replies (for example, delivering partial information to the browser), but also improves the scalability and throughput of the overall system by limiting the size of messages (message chunking) [5].

    Interfacing Grid

    The use of a job container as the mechanism for job invocation also allows for a standardized interface for submitting jobs to the Grid [6] (Figure 8). We provide two functionally identical methods for this web service interface - invokeJobRaw and invokeJobXML.

    Figure 8 GridJobService WSDL

    Both methods allow invocation of a job on the Grid. The first uses MTOM to pass a binary-serialized JobContainer class, while the second supports XML marshalling of all elements of the JobContainer (Figure 5). In addition to the JobContainer, both methods pass two additional parameters to the Grid:

    • A request handle, which uniquely identifies the request and is used by the consumer to match replies to requests (see below)
    • A reply URL, at which the consumer listens for replies; this URL should expose the GridJobServiceReplies service (Figure 9)

    Figure 9 Grid Job Service Reply WSDL

    Implementation of Grid master

    The class diagram for the Grid master is presented in Figure 10. In addition to implementing the basic job runtime described above, the master's software also implements basic framework features, including threading [7], request/response matching, request timeouts, etc.

    In order to support the request/multiple-replies paradigm for item execution, instead of using "get replies with wait" (a common request/reply pattern when using messaging), we decided to use a single listener and build our own reply-matching mechanism. Finally, we implemented a timeout mechanism ensuring that the job gets the "first" reply from every item within a predefined time interval (defined in the job container).

    Figure 10 Grid master implementation

    Implementation of Grid node

    The class diagram for a Grid node is presented in Figure 11. As in the master runtime, we complement the basic item execution with framework support, including threading, execution timeouts, etc.

    Figure 11 Grid node implementation

    To avoid stranding node resources on items that run forever, we have implemented an item eviction policy based on the item's execution time. An item running longer than the time it advertised (in the item container) will be terminated, and a timeout exception will be sent back to the job.
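The eviction policy can be sketched with a Future and a deadline taken from the item's advertised execution time (names invented; a real node would report a timeout exception to the job rather than a string):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ItemEviction {
    // Terminate an item that runs longer than the time it advertised.
    public static String runWithDeadline(Callable<String> item, long advertisedMillis) {
        ExecutorService node = Executors.newSingleThreadExecutor();
        try {
            Future<String> result = node.submit(item);
            try {
                return result.get(advertisedMillis, TimeUnit.MILLISECONDS);
            } catch (TimeoutException e) {
                result.cancel(true); // evict the runaway item
                return "timeout";    // stands in for the timeout exception
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        } finally {
            node.shutdownNow();
        }
    }

    public static void main(String[] args) {
        System.out.println(runWithDeadline(() -> "done", 1000));   // done
        System.out.println(runWithDeadline(() -> {
            Thread.sleep(5000);                                    // runs too long
            return "never";
        }, 100));                                                  // timeout
    }
}
```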

    Grid consumer framework

    We have also developed a consumer implementation wrapping the web services (Figure 8, Figure 9) with a simple Java API (Figure 12). It leverages an embedded Jetty web server and allows a consumer to submit a job request to the Grid and register a callback for receiving replies.

    Figure 12 Grid consumer


    The introduction of the EDB allows architects to further simplify SOA implementations by introducing "standardized" access from service implementations to the enterprise data. It simplifies both service invocation and execution models and provides for further decoupling of services. The use of a Grid for EDB implementations supports the EDB's scalability and high availability. Finally, the use of service agents executing directly in the Grid further improves scalability and performance. The Grid's high-level architecture and programming model described in this article provide a simple yet robust foundation for such implementations.


    Many thanks to my coworkers at Navteq, especially Michael Frey, Daniel Rolf and Jeffrey Herr, for discussions and help with the Grid and its programming-model implementation.


    1. B. Lublinsky. Incorporating Enterprise Data into SOA. November 2006, InfoQ.

    2. Mike Rosen, Boris Lublinsky, Kevin Smith, Mark Balcer. Applied SOA: Service-Oriented Architecture and Design Strategies. Wiley, 2008, ISBN: 978-0-470-22365-9.

    3. Art Sedighi. Enterprise SOA Meets Grid. June 2006.

    4. David Chappell and David Berry. SOA - Ready for Primetime: The Next-Generation, Grid-Enabled Service-Oriented Architecture. SOA Magazine, September 2007.

    5. David Chappell. Next Generation Grid Enabled SOA.

    6. Data grid

    7. Hyacinth S. Nwana. Software Agents: An Overview

    8. Façade pattern

    9. Map Reduce.

    Get started
  • What makes an effective microservices logging strategy?

    System size and scale play a big role in microservices logging. Follow these best practices to develop a solid logging strategy within a microservices architecture.

  • How is asynchronous microservices tracing best accomplished?

    How can you trace a tricky workflow in an asynchronous microservices-oriented architecture? Two options include correlation IDs and distributed tracing tools.

  • What are the most essential microservice design principles?

    Don't hinder a microservices architecture because of a faulty design. Keep these five design principles in mind to build the proper components for your microservices architecture.

  • 10 microservices quiz questions to test your knowledge

    Don't sweat the details with microservices. Take this 10-question quiz to boost your microservices knowledge and impress interviewers during a job hunt.

  • RESTful parameters antipattern considerations for queries, paths

    Choose carefully for path and query parameters in URL design. Lackluster choices in the design phase can plague client resource access down the road.

  • Step-by-step RESTful web service example in Java using Eclipse and TomEE Plus

    In this easy-to-follow JAX-RS tutorial, we provide a RESTful web service example in Java using Eclipse and TomEE Plus, where we go from development to testing in less than 15 minutes.

  • Java developer tutorials a popular destination for 2018 readers

    What were readers interested in during 2018? Java developer tutorials topped the list. Learn how to integrate RESTful APIs, Git, and Jenkins CI tools.

  • Step-by-step Spring Boot RESTful web services example in Java using STS

    In a previous tutorial, we explained the basic tenets of good RESTful web service design. In this step-by-step Spring Boot RESTful web services example, we implement it.

  • RESTful APIs tutorial: Learn key web service design principles

    RESTful Java API designs shouldn't be hard to get right. This RESTful APIs tutorial shows core RESTful principles concerning URL structure and the effective use of HTTP methods.

  • SOAP web services bottom-up approach example in Java using Eclipse

    It's easy to create a web service from a JavaBean. This SOAP web services bottom-up approach example in Java using Eclipse and Apache Axis will prove it.

  • Top-down web service creation example in Java using Eclipse

    Creating a SOAP web service in Eclipse is easy if you have a WSDL file. This top-down web service approach example in Java using Eclipse tutorial shows how.

    Java Development Definitions
  • A

    abstract class

    In Java and other object oriented programming (OOP) languages, objects and classes (categories of objects) may be abstracted, which means that they are summarized into characteristics that are relevant to the current program’s operation.

  • AJAX (Asynchronous JavaScript and XML)

    AJAX (Asynchronous JavaScript and XML) is a technique aimed at creating better and faster interactive web apps by combining several programming tools including JavaScript, dynamic HTML (DHTML) and Extensible Markup Language (XML).

  • Apache Camel

    Apache Camel is a Java-based framework that implements messaging patterns from Enterprise Integration Patterns (EIP) to provide a rule-based routing and mediation engine for enterprise application integration (EAI).

  • Apache Solr

    Apache Solr is an open source search platform built upon a Java library called Lucene.

  • AWS SDK for Java

    The AWS SDK for Java is a collection of tools for developers creating Java-based Web apps to run on Amazon cloud components such as Amazon Simple Storage Service (S3), Amazon Elastic Compute Cloud (EC2) and Amazon SimpleDB.

  • AWS SDK for JavaScript

    The AWS SDK for JavaScript is a collection of software tools for the creation of applications and libraries that use Amazon Web Services (AWS) resources.

  • B

    bitwise operator

    Because they allow greater precision and require fewer resources, bitwise operators, which manipulate individual bits, can make some code faster and more efficient. Applications of bitwise operations include encryption, compression, graphics, communications over ports/sockets, embedded systems programming and finite state machines.
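A few concrete Java examples of bitwise operators and their results:

```java
public class BitwiseDemo {
    public static void main(String[] args) {
        int flags = 0b0101;                 // the value 5
        System.out.println(flags & 0b0011); // 1  (AND: test which bits are set)
        System.out.println(flags | 0b0010); // 7  (OR: set a bit)
        System.out.println(flags ^ 0b0101); // 0  (XOR: toggle bits)
        System.out.println(flags << 1);     // 10 (left shift: multiply by 2)
        System.out.println(flags >> 2);     // 1  (right shift: divide by 4)
    }
}
```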

  • C


    compositing

    Compositing is used to create layered images and video in advertisements, memes and other content for print publications, websites and apps. Compositing techniques are also used in video game development, augmented reality and virtual reality.

  • const

    Const (constant) in programming is a keyword that defines a variable or pointer as unchangeable.

  • CSS (cascading style sheets)

    This definition explains the meaning of cascading style sheets (CSS) and how using them with HTML pages is a user interface (UI) development best practice that complies with the separation of concerns design pattern.

  • E

    embedded Tomcat

    An embedded Tomcat server consists of a single Java web application along with a full Tomcat server distribution, packaged together and compressed into a single JAR, WAR or ZIP file.

  • EmbeddedJava

    EmbeddedJava is Sun Microsystems' software development platform for dedicated-purpose devices with embedded systems, such as products designed for the automotive, telecommunication, and industrial device markets.

  • encapsulation in Java

    Java offers four different "scope" realms--public, protected, private, and package--that can be used to selectively hide data constructs. To achieve encapsulation, the programmer declares the class variables as “private” and then provides what are called public “setter and getter” methods which make it possible to view and modify the variables.
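A minimal sketch of encapsulation using an invented Account class: the field is private, and the public setter can enforce rules on every modification:

```java
public class Account {
    // Private state: no other class can touch this variable directly.
    private double balance;

    // Public "getter" exposes the value without exposing the field.
    public double getBalance() { return balance; }

    // Public "setter" is the only way to modify the field, so it can
    // enforce invariants on every change.
    public void setBalance(double balance) {
        if (balance < 0) {
            throw new IllegalArgumentException("balance must be >= 0");
        }
        this.balance = balance;
    }

    public static void main(String[] args) {
        Account account = new Account();
        account.setBalance(150.0);
        System.out.println(account.getBalance()); // 150.0
    }
}
```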

  • Enterprise JavaBeans (EJB)

    Enterprise JavaBeans (EJB) is an architecture for setting up program components, written in the Java programming language, that run in the server parts of a computer network that uses the client/server model.

  • exception handler

    In Java, checked exceptions are verified when the code is compiled; for the most part, the program should be able to recover from these. Exception handlers are coded to define what the program should do under specified conditions.
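A minimal handler sketch (the `SafeParse` class is illustrative; `NumberFormatException` happens to be unchecked, but the try/catch shape is the same for checked exceptions):

```java
public class SafeParse {
    // The catch block is the exception handler: it defines what the
    // program should do when parsing fails, namely recover with a
    // fallback value instead of crashing.
    public static int parseOrDefault(String s, int fallback) {
        try {
            return Integer.parseInt(s.trim());
        } catch (NumberFormatException e) {
            return fallback;
        }
    }
}
```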

  • F

    full-stack developer

    A full-stack developer is a programmer who has a functional knowledge of all the techniques, languages and systems engineering concepts required in software development.

  • G

    git stash

    Git stash is a built-in command of the Git distributed version control tool that locally saves the most recent uncommitted changes in a workspace and resets the workspace to the prior commit state.

  • GraalVM

    GraalVM is a high-performance Java Development Kit distribution from Oracle that runs Java and other JVM-language code, can compile it ahead of time into native executables, and supports polyglot programming with languages such as JavaScript and Python.

  • Groovy

    Groovy is a dynamic object-oriented programming language for the Java virtual machine (JVM) that can be used anywhere Java is used.

  • GWT (GWT Web Toolkit)

    The GWT software development kit facilitates the creation of complex browser-based Java applications that can be deployed as JavaScript, for portability across browsers, devices and platforms.

  • H


    Hibernate

    Hibernate is an open source object relational mapping (ORM) tool that provides a framework to map object-oriented domain models to relational databases for web applications.

  • HTML (Hypertext Markup Language)

    HTML (Hypertext Markup Language) is a text-based approach to describing how content contained within an HTML file is structured.

  • I


    InstallAnywhere

    InstallAnywhere is a program that can be used by software developers to package a product written in Java so that it can be installed on any major operating system.

  • IntelliJ IDEA

    The free and open source IntelliJ IDEA includes JUnit and TestNG, code inspections, code completion, support for multiple refactorings, Maven and Ant build tools, a visual GUI (graphical user interface) builder and a code editor for XML as well as Java. The commercial version, Ultimate Edition, provides more features.

  • inversion of control (IoC)

    Inversion of control, also known as the Hollywood Principle, changes the control flow of an application and allows developers to sidestep some typical configuration hassles.

  • J

    J2ME (Java 2 Platform, Micro Edition)

    J2ME (Java 2 Platform, Micro Edition) is a technology that allows programmers to use the Java programming language and related tools to develop programs for mobile wireless information devices such as cellular phones and personal digital assistants (PDAs).

  • JAR file (Java Archive)

    A Java Archive, or JAR file, contains all of the various components that make up a self-contained, executable Java application, deployable Java applet or, most commonly, a Java library to which any Java Runtime Environment can link.

  • Java

    Java is a widely used programming language expressly designed for use in the distributed environment of the internet.

  • Java abstract class

    In Java and other object-oriented programming (OOP) languages, objects and classes may be abstracted, which means that they are summarized into characteristics that are relevant to the current program’s operation.
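For instance (the `Shapes` class and its members are illustrative), an abstract class can declare the one characteristic this program cares about while leaving the details to subclasses:

```java
public class Shapes {
    // The abstract class keeps only what matters here:
    // every shape has an area.
    public static abstract class Shape {
        public abstract double area();
    }

    // A concrete subclass must supply the abstract behavior.
    public static class Rectangle extends Shape {
        private final double w, h;

        public Rectangle(double w, double h) {
            this.w = w;
            this.h = h;
        }

        @Override
        public double area() {
            return w * h;
        }
    }
}
```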

  • Java annotations

    Within the Java development kit (JDK), there are simple annotations used to make comments on code, as well as meta-annotations that can be used to create annotations within annotation-type declarations.
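A minimal sketch of an annotation-type declaration read back via reflection (the `Audited` annotation and `AnnotationDemo` class are hypothetical):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class AnnotationDemo {
    // A minimal annotation-type declaration; RUNTIME retention
    // keeps it visible to reflection after compilation.
    @Retention(RetentionPolicy.RUNTIME)
    public @interface Audited { }

    public static class Service {
        @Audited
        public void save() { }
    }

    // Look up whether a public method carries the annotation.
    public static boolean isAudited(Class<?> cls, String method) {
        try {
            return cls.getMethod(method).isAnnotationPresent(Audited.class);
        } catch (NoSuchMethodException e) {
            return false;
        }
    }
}
```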

  • Java assert

    The Java assert is a mechanism used primarily in nonproduction environments to test for extraordinary conditions that will never be encountered unless a bug exists somewhere in the code.
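A short sketch (the `AssertDemo` class is illustrative). Note that the assert statement is checked only when the JVM runs with the `-ea` flag, so it costs nothing in production:

```java
public class AssertDemo {
    // The assert documents a condition that should never fail
    // unless the caller has a bug; with -ea a violation throws
    // AssertionError, otherwise the line is a no-op.
    public static double safeSqrt(double x) {
        assert x >= 0 : "negative input indicates a bug in the caller: " + x;
        return Math.sqrt(x);
    }
}
```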

  • Java Authentication and Authorization Service (JAAS)

    The Java Authentication and Authorization Service (JAAS) is a set of application program interfaces (APIs) that can determine the identity of a user or computer attempting to run Java code, and ensure that the entity has the privilege or permission to execute the functions requested.

  • Java BufferedReader

    Java BufferedReader is a public Java class that reads large volumes of data from disk into much faster RAM in a single operation, improving performance over issuing a separate disk read or network communication for each read command.
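A sketch of the buffering pattern (the `ReaderDemo` class is illustrative; `StringReader` stands in here for a slower source such as a file):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;

public class ReaderDemo {
    // BufferedReader fills an in-memory buffer in large chunks, so
    // each readLine() usually avoids a separate underlying read.
    public static int countLines(String text) {
        try (BufferedReader reader = new BufferedReader(new StringReader(text))) {
            int count = 0;
            while (reader.readLine() != null) {
                count++;
            }
            return count;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```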

  • Java Business Integration (JBI)

    Java Business Integration (JBI) is a specification that defines an approach to implementing a service-oriented architecture (SOA), the underlying structure supporting Web service communications on behalf of computing entities such as application programs or human users.

  • Java Card

    Java Card is an open standard from Sun Microsystems for a smart card development platform.

  • Java Champion

    The Java Champion designation is awarded to leaders and visionaries in the Java technology community.

  • Java chip

    The Java chip is a microchip that, when included in or added to a computer, will accelerate the performance of Java programs (including the applets that are sometimes included with Web pages).

  • Java Comparator

    Java Comparator can compare objects to return an integer based on a positive, equal or negative comparison. Since it is not limited to comparing numbers, Java Comparator can be set up to order lists alphabetically or numerically.
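For example (the `ComparatorDemo` class is illustrative), a comparator can order strings by length instead of alphabetically:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ComparatorDemo {
    // compare() returns a negative, zero or positive int;
    // comparingInt builds a comparator that orders strings by length.
    public static List<String> byLength(List<String> words) {
        List<String> sorted = new ArrayList<>(words);
        sorted.sort(Comparator.comparingInt(String::length));
        return sorted;
    }
}
```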

  • Java compiler

    Generally, Java compilers are run and pointed to a programmer’s code in a text file to produce a class file for use by the Java virtual machine (JVM) on different platforms. Jikes, for example, is an open source compiler that works in this way.

  • Java Cryptography Extension (JCE)

    The Java Cryptography Extension (JCE) is an application program interface (API) that provides a uniform framework for the implementation of security features in Java.

  • Java Data Objects (JDO)

    Java Data Objects (JDO) is an application program interface (API) that enables a Java programmer to access a database implicitly - that is, without having to make explicit Structured Query Language (SQL) statements.

  • Java Database Connectivity (JDBC)

    Java Database Connectivity (JDBC) is an API packaged with the Java SE edition that makes it possible to connect from a Java Runtime Environment (JRE) to external, relational database systems.

  • Java Development Kit (JDK)

    The Java Development Kit (JDK) provides the foundation upon which all applications that are targeted toward the Java platform are built.

  • Java Flight Recorder

    Java Flight Recorder is a Java Virtual Machine (JVM) profiler that gathers performance metrics without placing a significant load on resources.

  • Java Foundation Classes (JFC)

    Using the Java programming language, Java Foundation Classes (JFC) are pre-written code in the form of class libraries (coded routines) that give the programmer a comprehensive set of graphical user interface (GUI) routines to use.

  • Java IDE

    Java IDEs typically provide language-specific features in addition to the code editor, compiler and debugger generally found in all IDEs. Those elements may include Ant and Maven build tools and TestNG and JUnit testing.

  • Java keyword

    Java keywords are terms that have special meaning in Java programming and cannot be used as identifiers for variables, classes or other elements within a Java program.

  • Java Message Service (JMS)

    Java Message Service (JMS) is an application program interface (API) from Sun Microsystems that supports the formal communication known as messaging between computers in a network.

  • Java Mission Control

    Java Mission Control is a performance-analysis tool that renders sampled JVM metrics in easy-to-understand graphs, tables, histograms, lists and charts.

  • Java Platform, Enterprise Edition (Java EE)

    The Java Platform, Enterprise Edition (Java EE) is a collection of Java APIs that software developers can use to write server-side applications. It was formerly known as Java 2 Platform, Enterprise Edition (J2EE), and has since moved to the Eclipse Foundation, where it continues as Jakarta EE.

  • Java Runtime Environment (JRE)

    The Java Runtime Environment (JRE), also known as Java Runtime, is the part of the Java Development Kit (JDK) that contains and orchestrates the set of tools and minimum requirements for executing a Java application.

  • Java Server Page (JSP)

    Java Server Page (JSP) is a technology for controlling the content or appearance of Web pages through the use of servlets, small programs that are specified in the Web page and run on the Web server to modify the Web page before it is sent to the user who requested it.

  • Java string

    Strings, in Java, are immutable sequences of Unicode characters. Strings are objects in Java and the string class enables their creation and manipulation.
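A small illustration of that immutability (the `StringDemo` class is hypothetical):

```java
public class StringDemo {
    // String methods never change the receiver; they return new objects.
    public static String[] upperCopy(String s) {
        String upper = s.toUpperCase(); // a NEW String is created
        return new String[] { s, upper }; // the original is unchanged
    }
}
```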

  • Java virtual machine (JVM)

    A Java virtual machine (JVM), an implementation of the Java Virtual Machine Specification, interprets compiled Java binary code (called bytecode) for a computer's processor (or "hardware platform") so that it can perform a Java program's instructions.


  • JAVA_HOME

    JAVA_HOME is an operating system (OS) environment variable which can optionally be set after either the Java Development Kit (JDK) or the Java Runtime Environment (JRE) is installed.

  • JavaBeans

    JavaBeans is an object-oriented programming interface from Sun Microsystems that lets you build reusable applications or program building blocks called components that can be deployed in a network on any major operating system platform.

  • JavaFX

    JavaFX is a software development platform for the creation of both desktop apps and rich internet applications (RIAs) that can run on various devices. The name is a short way of typing "Java Effects."

  • JavaScript

    JavaScript is a programming language that started off simply as a mechanism to add logic and interactivity to an otherwise static Netscape browser.

  • JAX-WS (Java API for XML Web Services)

    Java API for XML Web Services (JAX-WS) is one of a set of Java technologies used to develop Web services.

  • JBoss

    JBoss is a division of Red Hat that provides support for the JBoss open source application server program and related middleware services marketed under the JBoss Enterprise Middleware brand.

  • JDBC Connector (Java Database Connectivity Connector)

    The JDBC (Java Database Connectivity) Connector is a program that enables various databases to be accessed by Java application servers that are run on the Java 2 Platform, Enterprise Edition (J2EE) from Sun Microsystems.

  • JDBC driver

    A JDBC driver (Java Database Connectivity driver) is a small piece of software that allows JDBC to connect to different databases. Once loaded, a JDBC driver connects to a database by providing a specifically formatted URL that includes the port number, the machine and database names.

  • JHTML (Java within Hypertext Markup Language)

    JHTML (Java within Hypertext Markup Language) is a standard for including a Java program as part of a Web page (a page written using the Hypertext Markup Language, or HTML).

  • Jikes

    Jikes is an open source Java compiler from IBM that adheres strictly to the Java specification and promises an "extremely fast" compilation.

  • JMX (Java Management Extensions)

    JMX (Java Management Extensions) is a set of specifications for application and network management in the J2EE development and application environment.

  • JNDI (Java Naming and Directory Interface)

    JNDI (Java Naming and Directory Interface) enables Java platform-based applications to access multiple naming and directory services.

  • JOLAP (Java Online Analytical Processing)

    JOLAP (Java Online Analytical Processing) is a Java application-programming interface (API) for the Java 2 Platform, Enterprise Edition (J2EE) environment that supports the creation, storage, access, and management of data in an online analytical processing (OLAP) application.

  • jQuery

    jQuery is an open source JavaScript library that simplifies the creation and navigation of web applications.

  • JRun

    JRun is an application server from Macromedia that is based on Sun Microsystems' Java 2 Platform, Enterprise Edition (J2EE).

  • JSON (JavaScript Object Notation)

    JSON (JavaScript Object Notation) is a text-based, human-readable data interchange format used for representing simple data structures and objects in Web browser-based code. JSON is also sometimes used in desktop and server-side programming environments.

  • JTAPI (Java Telephony Application Programming Interface)

    JTAPI (Java Telephony Application Programming Interface) is a Java-based application programming interface (API) for computer telephony applications.

  • just-in-time compiler (JIT)

    A just-in-time (JIT) compiler is a program that turns bytecode into instructions that can be sent directly to a computer's processor (CPU).

  • Jython

    Jython is an open source implementation of the Python programming language, integrated with the Java platform.

  • K

    Kebab case

    Kebab case -- or kebab-case -- is a programming variable naming convention where a developer replaces the spaces between words with a dash.

  • M

    MBean (managed bean)

    In the Java programming language, an MBean (managed bean) is a Java object that represents a manageable resource, such as an application, a service, a component, or a device.

  • Morphis

    Morphis is a Java-based open source wireless transcoding platform from Kargo, Inc.

  • N


    NetBeans

    NetBeans is a Java-based integrated development environment (IDE). The term also refers to the IDE’s underlying application platform framework.

  • O

    object-relational mapping (ORM)

    Object-relational mapping (ORM) is a mechanism that makes it possible to address, access and manipulate objects without having to consider how those objects relate to their data sources.

  • Open Service Gateway Initiative (OSGi)

    OSGi (Open Service Gateway Initiative) is an industry plan for a standard way to connect devices such as home appliances and security systems to the Internet.

  • OpenJDK

    OpenJDK is a free, open-source version of the Java Development Kit for the Java Platform, Standard Edition (Java SE).

  • P

    Pascal case

    Pascal case is a naming convention in which developers start each new word in a variable with an uppercase letter.

  • prettyprint

    Prettyprint is the process of converting and presenting source code or other objects in a legible and attractive way.

  • R

    Remote Method Invocation (RMI)

    RMI (Remote Method Invocation) is a way that a programmer, using the Java programming language and development environment, can write object-oriented programs in which objects on different computers can interact in a distributed network.

  • S

    Snake case

    Snake case is a naming convention where a developer replaces spaces between words with an underscore.
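As an illustration (the `SnakeCase` class is hypothetical), a camelCase identifier can be converted to snake_case mechanically:

```java
public class SnakeCase {
    // Insert an underscore before each uppercase letter,
    // then lowercase it: userFirstName -> user_first_name.
    public static String toSnake(String camel) {
        StringBuilder out = new StringBuilder();
        for (char c : camel.toCharArray()) {
            if (Character.isUpperCase(c)) {
                out.append('_').append(Character.toLowerCase(c));
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }
}
```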

  • SQLJ

    SQLJ is a set of programming extensions that allow a programmer using the Java programming language to embed statements that provide SQL (Structured Query Language) database requests.

  • Sun Microsystems

    Sun Microsystems (often just called "Sun"), the original developer of Java and a leading maker of Web servers, engineering workstations, data storage products and related software, was acquired by Oracle in 2010.

  • T


    Tomcat

    Tomcat is an application server from the Apache Software Foundation that executes Java servlets and renders Web pages that include Java Server Page coding.

  • X

    XAML (Extensible Application Markup Language)

    XAML, Extensible Application Markup Language, is Microsoft's XML-based language for creating a rich GUI, or graphical user interface. XAML supports both vector and bitmap types of graphics, as well as rich text and multimedia files.

    Guest View: How to set your microservices up for failure

    If you’ve been developing software for more than five years, you’ve probably seen this cycle before: New architecture emerges. Developers abandon previous best practices. Developers eventually realize that some of those “old-school” practices apply to the new architectural approach.

    Microservices are no exception.

    Some developers see microservices as a way to throw out established service-oriented architecture (SOA) approaches. However, in doing so, they are bringing back some of the complexity and redundancy that plagued us 15 to 20 years ago, which we have worked so hard to remove.

    (Related: Microservices force companies to innovate)

    Here are four particularly surefire ways to set your microservices up for failure. And for those who would like to avoid the headaches of the past, included are some modifications for reaping the benefits of microservices while keeping your software architecture agile, reliable, streamlined and high-performance.

    Skip deploying a central hub. The idea of adding a mediation gateway or enterprise service bus-like functionality into microservices themselves may be appealing, but it’s a certain strategy for adding layers of complexity and redundancy to your architecture. After all, most organizations will need microservices to connect both internally with their existing systems and externally with those of their partners. What could be more fun than having to modify a microservice every time one of those systems changes? Then there’s the added bonus of compromising security when services get pushed via APIs to the edge of the network for external access.

    The smarter way to think of a microservices architecture is as a layered system in which the microservices interact with a mediation engine or ESB-like central hub to handle mediation, transformation, and other integration functions. There are several benefits to this approach: Changes to microservices will be minimal if most of them can occur in the mediation layer, and your architecture will be more agile if a change can occur once in the mediation layer and then be applied to all applicable microservices. Using an enterprise service bus (ESB) or similar mediation engine provides the necessary connectivity to incorporate legacy data and services into microservices, as well as for conducting a gradual migration. Equally important, a mediation layer allows the organization to maintain its own standards while partners in the ecosystem maintain theirs.

    Ignore your dependencies. Not documenting what a microservice requires and depends on is the perfect way to complicate your composition scenarios and create chaos if a microservice crashes.

    On the other hand, dependency management not only streamlines service composition, it also helps to wire microservices into a service broker or hub. This can be accomplished by having a service registry catalog everything to provide a holistic view and then have microservices communicate using a microservice architecture broker—essentially applying basic SOA concepts.

    If there are dependencies that need to be managed, the registry-and-broker approach provides a mechanism for analyzing how other microservices will be affected. For instance, it can help to understand what will happen to dependent microservices if a particular microservice crashes, providing developers with the insights to build in recovery procedures. Similarly, this approach offers a way to address the versioning of microservices. Moreover, APIs for microservices with dependencies can be cataloged in the service broker to help ensure that they also continue to perform as needed.

    APIs go hand-in-hand with microservices, so here is something else to consider.

    Gloss over your APIs when configuring apps. Doing so will allow you to recreate the experience of poorly defined, monolithic services with zero communication. However, if you want to avoid going full retro, make sure your microservices architecture includes the creation of REST APIs that let you tailor application logic and resource models.

    Stick to the fundamentals by implementing the microservice first and then exposing it via an API, which serves as the interface to other systems and services. The API, not the microservice, should be consumable. Typically APIs will be built using a REST approach, the popular, developer-friendly architecture for apps. However, be prepared to have APIs support other standards, such as the OASIS Message Queuing Telemetry Transport (MQTT) protocol, a lightweight messaging protocol widely adopted for connecting IoT devices, sensors and gateways.

    Embed failover scenarios into your microservices code. Last but hardly least, isn’t assured uptime worth the added development and management complexity? Honestly, it’s not. Software architects and developers should only have to worry about the microservice itself, and not how scalability, availability and performance will be managed. Those should be managed by an underlying platform that also handles error detection, handling and prediction.

    The advantage of a platform that supports containers is that, when it detects a failure condition, it can quickly spin up another runtime instance of the microservice. Additionally, an analytics layer can capture system-level data to inform decisions at the container level. In doing so, real-time and predictive notifications can enable containers and container-management systems to take the necessary actions before a failure occurs.

    Microservices provide an effective way to break applications into smaller modules. But the best microservices are not islands. Instead, they take advantage of common underlying functionality for mediation, governance, API management, and runtime reliability and performance. Equally important, this lets developers and software architects focus on a microservice’s functionality instead of wasting cycles on how to keep it up to date and running.

    What is cloud computing? Everything you need to know now

    Cloud computing is an abstraction of compute, storage, and network infrastructure assembled as a platform on which applications and systems can be deployed quickly and scaled on the fly. Crucial to cloud computing is self-service: Users can simply fill in a web form and get up and running.

    The vast majority of cloud customers consume public cloud computing services over the internet, which are hosted in large, remote data centers maintained by cloud providers. The most common type of cloud computing, SaaS (software as a service), delivers prebuilt applications to the browsers of customers who pay per seat or by usage, exemplified by such popular apps as Salesforce, Google Docs, or Microsoft Teams. Next in line is IaaS (infrastructure as a service), which offers vast, virtualized compute, storage, and network infrastructure upon which customers build their own applications, often with the aid of providers’ API-accessible services.

    When people casually say “the cloud,” they most often mean the big IaaS providers: AWS (Amazon Web Services), Google Cloud, or Microsoft Azure. All three have become gargantuan ecosystems of services that go way beyond infrastructure: developer tools, serverless computing, machine learning services and APIs, data warehouses, and thousands of other services. With both SaaS and IaaS, a key benefit is agility. Customers gain new capabilities almost instantly without capital investment in hardware or software—and they can instantly scale the cloud resources they consume up or down as needed.

    Cloud computing definitions for each type

    Way back in 2011, NIST posted a PDF that divided cloud computing into three “service models”—SaaS, IaaS, and PaaS (platform as a service)—the latter a controlled environment within which customers develop and run applications. These three categories have largely stood the test of time, although most PaaS solutions now make themselves available as services within IaaS ecosystems rather than presenting themselves as their own clouds.

    Two evolutionary trends stand out since NIST’s threefold definition. One is the long and growing list of subcategories within SaaS, IaaS, and PaaS, some of which blur the lines between categories. The other is the explosion of API-accessible services available in the cloud, particularly within IaaS ecosystems. The cloud has become a crucible of innovation where many emerging technologies appear first as services, a big attraction for business customers who understand the potential competitive advantages of early adoption.

    SaaS (software as a service) definition

    This type of cloud computing delivers applications over the internet, typically with a browser-based user interface. Today, the vast majority of software companies offer their wares via SaaS—if not exclusively, then at least as an option.