XaaS isn’t everything — and it isn’t serviceable

XaaS is, regrettably, defined as, “any computing service that is delivered via the internet and paid for in a flexible consumption model rather than as an upfront purchase or license.”

Do some Googling about XaaS and you’ll find much repetitive gushing, but for the more jaundiced among us, it’s hard to avoid concluding that XaaS is, in fact, little more than the intersection of cloud-based computing and charge-backs.

And yet in all of the discussion, that XaaS is the logical outcome of service-oriented architecture (SOA) seems to have been ignored.

Also strange is that XaaS excludes the part of “everything” that the uninitiated might think of as the most important set of services IT provides, namely, everything business analysts, IT internal consultants, and application development and support staff do for a living.

Which I guess means “Everything” as a Service is really “A Few Things as a Service” (AFTaaS), or maybe “Everything Except Effort as a Service” (EEEaaS).

What XaaS should really mean

What XaaS ought to refer to is the logical application of SOA principles to just about everything that gets done in the enterprise.

It should, for example, include what services firms call business process outsourcing (BPO). It should also include what we might dub “business process insourcing” but usually call “shared services.”

Work as a Service (WaaS), anyone?

XaaS should, that is, include not only the technology itself but also the business results the technology supports.

But not who pays, and how. Architecture is about how solutions are put together, not about financing them.

‘Work as a Service’: Shared services as architecture

Here’s the thing: BPO isn’t new, and has made paying by the drink for work an option since its inception.

And just as IT can provide SaaS to its business users either by way of commercial cloud or in-house provisioned applications, so business functions can make their services available to the rest of the business by way of organizing as in-house provisioned shared services, or through use of a BPO vendor.

But a shared services group isn’t just like a BPO only internal. The difference between them? Contracting with a BPO provider isn’t an architectural decision. Organizing as an internal shared service most assuredly is.

Like most other outsourcing arrangements, the decision to engage a BPO provider is usually an admission of management failure. It hands off responsibility for a business function that internal management couldn’t properly oversee to a contract.

This doesn’t always mean organizing as a collection of shared services is the right choice, though.

Among the downsides: An in-house shared-services business architecture, unlike a BPO, has, when carried to its logical conclusion, a reductio ad absurdum outcome, where every business department charges every other business department for the services it provides. For example, IT might charge HR a monthly fee for use of the HRIS, while HR might reciprocate by charging IT for recruiting, benefits management, and payroll services.

Ubiquitous shared-services can turn the enterprise into a giant financial ouroboros.
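The ouroboros is easy to demonstrate with arithmetic. In this toy sketch (the departments and figures are invented for illustration), every internal charge is simultaneously revenue for one department and a cost for another, so however much money circulates, the enterprise as a whole nets exactly zero:

```python
# Toy illustration of the shared-services "ouroboros": every department
# charges every other, lots of money moves, yet the enterprise nets zero.
# All departments and figures are hypothetical.
charges = {
    ("IT", "HR"): 40_000,     # IT bills HR for the HRIS
    ("HR", "IT"): 25_000,     # HR bills IT for recruiting and benefits admin
    ("IT", "Sales"): 60_000,
    ("HR", "Sales"): 15_000,
    ("Sales", "IT"): 10_000,  # internal rate cards cut both ways
}

def net_positions(charges):
    """Net internal billing position per department (+ = net biller)."""
    net = {}
    for (provider, consumer), fee in charges.items():
        net[provider] = net.get(provider, 0) + fee
        net[consumer] = net.get(consumer, 0) - fee
    return net

gross = sum(charges.values())
net = net_positions(charges)
print(f"gross internal billing: {gross}")        # 150000 changes hands
print(f"enterprise net total: {sum(net.values())}")  # 0: money only circulates
```

However large the gross figure grows, the net is zero by construction; the chargebacks reallocate budget but create nothing, which is the point of the ouroboros metaphor.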

Business services oriented architecture: One size fits no one

BPOs and XaaS do share a characteristic that might, in some situations, be a benefit but in most cases is a limitation, namely, the need to commoditize. This requirement isn’t a matter of IT’s preference for simplification, either. It’s driven by business architecture decision-makers’ preference for standardizing processes and practices across the board.

This might not seem to be an onerous choice, but it can be. Providing a service that operates the same way to all comers no matter their specific and unique needs might cut immediate costs but can be, in the long run, crippling.

Imagine, for example, that Human Resources embraces the Business Services Oriented Architecture approach, offering up Human Resources as a Service to its internal customers. As part of HRaaS it provides Recruiting as a Service (RaaS). And to make the case for this transformation it extols the virtues of process standardization to reduce costs.

Imagine, then, that you’re responsible for Store Operations for a highly seasonal retailer, one that has to ramp up its in-store staffing from Black Friday through Boxing Day. Also imagine IT needs to recruit a DBA.

I trust it’s clear the same process won’t work for both recruiting store staff by the hundreds and for hiring a single, highly specialized technical professional.

“Standardize” is easy to say but hard to make work right. And that’s before the HR manager responsible for recruiting tries to explain what they need the HRIS to do.

In this, what we might call a business-services-oriented architecture isn’t that different from adopting SOA (along with microservices, its teeny brethren), for your application architecture. In both cases, enforcing standardization on a single version is one-size-fits-no-one engineering.
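One way out of one-size-fits-no-one, sketched below with entirely hypothetical names and a crude headcount-based rule, is to standardize the service interface while allowing process variants behind it, so that RaaS can serve both the seasonal retailer and the one-off DBA search:

```python
# Sketch (all names and the selection rule are hypothetical): a single
# "Recruiting as a Service" entry point that still permits process
# variants, rather than forcing one standardized process on everyone.
from dataclasses import dataclass

@dataclass
class Requisition:
    role: str
    headcount: int

def bulk_seasonal_process(req):
    # High-volume path: job fairs, group screening, minimal interviews.
    return [f"{req.role} hire #{i + 1}" for i in range(req.headcount)]

def specialist_search_process(req):
    # Low-volume path: sourcing, technical screens, panel interviews.
    return [f"{req.role} (specialist search)"]

def recruit(req):
    """One service interface; the process variant is chosen per requisition."""
    process = bulk_seasonal_process if req.headcount > 50 else specialist_search_process
    return process(req)

store_staff = recruit(Requisition("store associate", 300))  # bulk path
dba = recruit(Requisition("DBA", 1))                        # specialist path
```

The consumers see one contract; the "standardization" lives in the interface, not in a single enforced process.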

Wed, 27 Jul 2022 22:00:00 -0500. Author: Bob Lewis
Book Review: Applied SOA

Applied SOA is a new book on Service Oriented Architecture written by four leading SOA practitioners: Michael Rosen, Boris Lublinsky, Kevin Smith and Marc Balcer. This book is a handbook that aims at making you successful with your SOA implementation. We asked the authors a few questions before reviewing the book.

In addition to our review, InfoQ was able to obtain a sample chapter. Chapter 3: Getting Started with SOA can be downloaded here.

InfoQ: What have been the major hurdles people encountered in their SOA initiatives?

Boris: I think that the majority of people are more interested in the SOA technologies, aka Web services, rather than in business modeling and decomposition - i.e. the SOA foundation. For example, a typical SOA debate is REST vs. SOAP, not why we need to use services. Do not get me wrong, technology debates are important and I enjoy them, but as a result, in today's reality JBOWS (Just a Bunch Of Web Services) rules. That is why, in the book, we deliberately stayed away from these and other implementation debates and tried to address the heart of SOA - its architectural underpinnings.

Kevin: We have painfully experienced that the lack of planning and governance has led to many chaotic SOA implementations. As Boris mentioned, many times, JBOWS will quickly get deployed without any focus on planning, semantic interoperability or enterprise data standards, and this makes integration difficult as new business partners eventually want to use these services. When services are deployed without plans related to service change management, enterprise management and real-time operations gets to be quite difficult and painful. One of the reasons that we wanted to write this book is to provide guidance based on the lessons that we have learned in this area, focusing on practical implementation – including planning, management, and governance.

Another hurdle that we have seen is security. Real business solutions demand real solutions for security, and this is sometimes underestimated or overlooked in new SOA projects – with dire consequences. The “alphabet soup” of overlapping (and sometimes competing) standards, specifications and technologies used for securing SOA can be overwhelming, and most security standards for SOA include various options, each demanding an in-depth knowledge of the fundamentals of information security. We have found that some of the major challenges for businesses are identity and attribute federation between business partners, the subtleties of identity and attribute propagation, tradeoffs between security approaches and performance and availability, and access control policy management and enforcement – just to name a few. The questions that we see people asking are, “How do we get started?”, “What are our options for building security into our SOA?”, “How do we balance security and performance?” and “How do we actually apply the standards?” We answer these questions in our book, providing a practical guide for SOA security, with solution blueprints and decision flows to make the decision-making process easier.

Michael: As with most new technology solutions, the challenges are not technical but rather are around motivating and managing the structural changes required to effectively utilize the new technologies. Anyone can build a service. The challenge is building the right services, and doing so within the context of the enterprise rather than the limited scope of a single project. These are issues that we specifically address with the architecture presented in our book.

InfoQ: "Lack of skills" is often considered as the #1 problem in SOA, how can a company go about solving it, beyond sharing Applied SOA with their architects and analysts?

Boris: "Lack of skill" is a serious problem; the question is which skills are we talking about? In my opinion, SOA is more a business problem than a technical one, so bringing together even very skilled technical people will not always solve SOA pain. We need "true architects" to advance SOA. I would love to say, "Buy a book - it will solve all of your problems," but realistically, reading a book may help to pinpoint the issues and evaluate some of the solution approaches; the real solutions still have to come from inside the company.

Kevin: I would like to echo that. We need technologists with a good background in architecture and design, first of all – and some of this guidance is in our book. Probably as important is the fact that companies need analysts who understand the business problems for a particular domain. For example, if you are responsible for an enterprise SOA for a medical community, and you don’t actually have people in your company who intimately understand that medical community’s internal processes, then you aren’t going to do a good job. A company really needs to hire expertise from the business domain they are supporting, because as Boris said, SOA is more about solving business problems than technical ones.

Michael: There is lots of training available today on important topics such as business process design, SOA analysis and design, architecture, etc. The important thing for a company to figure out first is what processes and approaches will work for them given their requirements, environment, culture and so on, and then to ramp up on the specific skills that will actually provide them value. Often some sort of competency center is a good approach to building a critical mass of the appropriate skills.

InfoQ: Where are we with SOA today, is it real? where is the level of maturity going? What's next?

Boris: I think SOA today is real, but not mature. The majority of companies are opting for the low-hanging fruit - Service Oriented Integration (SOI). As indicated in the latest Burton Group report, this is today a prevalent reason for SOA failure. Unless we start to seriously link SOA with the enterprise business model and organizational structure, it will probably never fully mature and live up to its expectations.

Kevin: I would say that SOA is real, but the implementing technologies are not yet meeting the vision of what SOA can be. Many vendors push for you to buy their product suite for a “SOA in a box” solution, where interoperability works well if you use all of their products, but not so much if you integrate with other systems. Many web service toolkits have convenient point-and-click component-to-web-service tools, and so developers find it easy to use them, and the result is that the services are no more well-designed than the initial components. Typical POJOs are not initially designed with a corresponding schema in mind, so when they are transformed into web services, semantic interoperability typically suffers, as the resulting service is quite literally tightly coupled to the object’s implementation. I do think that the implementing technologies for SOA are getting much better, but it does take some discipline for architects, designers, and programmers to not choose the quick-and-dirty approaches that are offered.

I think the hype factor is dying down on SOA, and that is a good thing. In the past, SOA evangelist salesmen were trying to sell a utopian vision of a SOA that brings world peace and is the silver bullet for all that ails you. It is important to know that SOA can help solve your business problems, and there are many technologies and standards used for SOA implementation. It is also important to know that SOA itself is technology agnostic. I think SOA as an architecture discipline will continue to mature and has a bright future.  

Michael: SOA is definitely real. We know of many companies that have had SOA in place for 10 years. All the major tools, platforms and packaged applications are moving to SOA and more and more SaaS services are available all the time. But, the majority of companies today are just getting started with SOA. On the typical maturity scale of 0 to 5, most are probably around 1, so now is definitely the time for them to focus on architecture rather than services.

Book Review

All topics are treated in depth and should be readily applicable to solving a lot of the issues that you might encounter as you implement a Service Oriented Architecture, whether you are an architect, analyst, designer or CTO/CIO. All topics are illustrated with concrete and relevant examples, directly coming from real-world projects:

The current collection of SOA books and articles is rich in high-level theory but light on practical advice

In particular, this book will help you tie your SOA initiative with your Enterprise Architecture, IT Governance, Core Data and BPM initiatives.

The authors have noted that after working with different companies, there are several common areas of confusion:

  • First, what is SOA, and how does it differ from Web Services or other distributed technologies?
  • Second, what is the relationship between the business and SOA?
  • Third, how do you design a good service?
  • Fourth, how do you effectively integrate existing applications and resources into a service-oriented solution?
  • Finally, how do services fit into overall enterprise solutions?

In particular, the authors argue that while it is easy to build "a" service, because the tools have reached a high level of maturity, it is still a challenge to build a "good" service based on solid design principles that fits into an overall architecture and can be combined into larger business processes within the enterprise.

The book is divided into three parts:

  • Understanding SOA
  • Designing SOA
  • Case Studies

A key to understanding SOA is to understand the challenges it tackles:

  • Reuse
  • Efficient Development
  • Integration of applications and Data
  • Agility, Flexibility and alignment

From this starting point, the authors develop an understanding of what is needed to achieve these goals (Chapter 3: Getting Started with SOA can be  downloaded here). From their perspective, one must focus on:

  • Methodology and Service lifecycle
  • Reference Architecture
  • Business Architecture
  • Information Design
  • Identifying Services
  • Specifying Services

One of the points that they emphasize is the importance of the Information Architecture when implementing a Service Oriented Architecture, and in particular the role it plays in the definition of the service interface.

In the second part, the authors share a deep level of expertise. Their approach is decidedly focused on the knowledge of the business. They rely on business context, business domain and business process models to identify services. Chapter 5 is dedicated to the relationship between information modeling and service design: 

A fundamental difference between service operations and object methods is that service operations have a much larger granularity. Rather than many simple operations with simple parameters, services produce and consume big chunks of a document.

This relationship is one of the toughest problems that needs to be solved in a Service Oriented Architecture, regardless of the technology that you are using. It helps with identifying appropriate interface footprints (which are easier to reuse by new consumers) and it helps with analyzing the impact of information model changes on service interfaces (versioning).
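The granularity difference the authors describe can be sketched as follows (the names are illustrative, not taken from the book): an object-style API needs a conversation of small calls, while a service-style operation exchanges one self-describing document and is coupled only to the document schema, not to the provider's internal object model:

```python
# Illustrative contrast between fine-grained object methods and a
# coarse-grained, document-based service operation (names invented).

# Fine-grained, object-style: many simple calls with simple parameters.
class CustomerRecord:
    def __init__(self):
        self._fields = {}
    def set_name(self, name): self._fields["name"] = name
    def set_street(self, street): self._fields["street"] = street
    def set_city(self, city): self._fields["city"] = city

# Coarse-grained, service-style: one operation, one document in, one out.
def update_customer(document: dict) -> dict:
    """Consumes a whole customer document and returns an acknowledgement
    document; the contract is the document schema, nothing more."""
    required = {"customer_id", "name", "address"}
    missing = required - document.keys()
    if missing:
        return {"status": "rejected", "missing": sorted(missing)}
    return {"status": "accepted", "customer_id": document["customer_id"]}

ack = update_customer({
    "customer_id": "C-42",
    "name": "A. Consumer",
    "address": {"street": "Main St 1", "city": "Springfield"},
})
```

Because the whole document travels at once, the consumer's coupling is to the schema; the provider can reorganize its objects freely, which is exactly the impact-analysis benefit described above.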

Chapter 6 looks at the design of service interfaces. This chapter reviews interaction styles and provides step-by-step design guidelines to help with each aspect of the service: documents, operations, exceptions...

Chapter 7 provides service implementation guidelines based on a Service Architecture which includes:

  • Service Interface Layer
  • Service Business Layer
  • Resource Access Layer

In particular, the authors provide the detailed responsibilities of each of these layers.
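A minimal sketch of how those three layers might separate responsibilities (the function names and data are invented; the book's own examples may differ):

```python
# Illustrative three-layer anatomy of a single service (names invented).

# Resource access layer: hides how and where data is stored.
_ORDERS = {"O-1": {"status": "shipped"}}

def load_order(order_id):
    return _ORDERS.get(order_id)

# Service business layer: the business logic, storage-agnostic.
def order_status(order_id):
    order = load_order(order_id)
    return order["status"] if order else "unknown"

# Service interface layer: translates between the wire document and the
# business layer; this is the only layer consumers ever see.
def handle_status_request(request: dict) -> dict:
    return {"order_id": request["order_id"],
            "status": order_status(request["order_id"])}

response = handle_status_request({"order_id": "O-1"})
```

Swapping the data store only touches the resource access layer, and changing the wire format only touches the interface layer, which is the point of assigning each layer its own responsibilities.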

Chapter 8 looks at Service Composition.

Service Composition is one of the great benefits of using SOA

The chapter covers the Architecture Models of service composition as well as the different implementation models which include: plain old code, service component architecture, event-based and orchestration-based composition. The chapter also provides an in-depth discussion about the relationships between compositions and business rules, transactions and human activities. 

Chapter 9 shifts gears and focuses on using services to build enterprise solutions. This topic remains one of the least understood amongst architects and analysts.

Building enterprise solutions typically requires leveraging existing enterprise applications for service implementations and combining multiple existing services into enterprise solutions.

The authors offer an adaptation of the Model-View-Controller pattern as the foundation of the architecture of Service Oriented Enterprise Solutions. The chapter also offers important discussion on service versioning, security, exception handling and logging, and management and monitoring, which are all important aspects of enterprise solutions.  

Integration is an important part of SOA. Chapter 10 is dedicated to exploring the role that integration plays in SOA. The authors identify several "islands" that require some level of integration:

  • Islands of data
  • Islands of automation
  • Islands of security

In the authors' opinion, the role of SOA is to rationalize information, activities and identities to enable existing and new consumers to reuse functionality from legacy systems of record and properly align their state. The chapter provides several patterns which can be used to efficiently implement services from legacy systems.

Chapter 11 introduces the concepts of SOA security. The authors start by providing a thorough introduction to the WS-Security standards and discuss important topics such as auditing, authorization and user identity propagation.

Chapter 12 concludes this section with a practical SOA governance and service lifecycle framework. This chapter provides in-depth guidance for identifying, implementing, deploying, operating and versioning services. The authors introduce the OMG's Reusable Asset Specification (RAS), which can potentially be used to capture metadata about services. The chapter also covers run-time policies.

The last section introduces two use cases:

  • Travel Insurance
  • Service-Based Integration in Insurance

Each use case is developed in depth with a sample of each artifact recommended in the previous section of the book. These use cases represent state-of-the-art SOA implementations of enterprise-class solutions.

Applied SOA represents a complete introduction to SOA with practical steps to set up an organization capable of delivering complex service-oriented solutions.

Wed, 15 Jun 2022 12:00:00 -0500
Agile and SOA, Hand in Glove?

A lot of people feel that SOA and agile development bite each other; to agile developers architecture represents big upfront design and ‘death by PowerPoint’. To architects, agile developers are like pirates, violating rules and regulations. But if you look past the bias, they actually have a lot in common. We see a lot of value in both and we are passionate about bridging the seemingly widely separated worlds together with you.

For that purpose we examine, in this article, the compatibility between SOA and Agile development. Before we can do that, let’s define the two concepts briefly.

Service Oriented Architecture is an architectural style that is a means to achieve business agility using services that deliver business value.

Agile development is a software development methodology that focuses on human capabilities to deliver business value fast.

Agile, SOA and management

Both Service Oriented Architecture and Agile development processes are relatively new and don’t adhere to traditional management paradigms like scientific management or bureaucracy [Taylor, Weber, Simon].

I even believe that they go beyond contemporary management theories like contingency theory (Vroom) and systems theory (Checkland). Both represent a fundamental change in the way things are done and in the way people look at:

  • organizations (functional & bureaucratic vs. cross functional & process centered)
  • ownership (sole vs. shared),
  • responsibility (forced upon & divided vs. taken upon & shared)
  • management (top down & power vs. meet in the middle & facilitating) and most importantly,
  • at people (people as resource of the organization vs. people that are the organization, [P. Senge, M. Buelens]).

So, we state they have a lot in common. Let's test this premise, comparing the 12 agile principles to SOA principles, to see how well the two fit, and where the potential mismatches occur.

Unlike agile principles, there is not one source for SOA principles, so we picked a source that is often used by architects: the principles that Thomas Erl has published.

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

This is not as straightforward as it looks in our part of the world. In Anglo-Saxon countries (USA, UK) a customer is the end customer, the one that buys the product. In Rijnland countries (The Netherlands, Germany) the customer doesn't have to be the customer of the company. For example, if you build software for an online bookstore, the customer can be the sales department of the online bookstore, not the person that buys a book.

Valuable is often misinterpreted. The software in the case of the bookstore is valuable if the customer of the bookstore can use it easily to buy books, or if the books are cheaper because we don't need a bookstore. Depending on the operating model of the company, we need a cheap solution, or a very sophisticated solution.

Early is also often misinterpreted. If the online bookstore offers online payment, customers of the bookstore won't be happy if you deliver the software without security in place, or with an ever changing user interface. But the customer (sales department) needs early and continuous delivery to prioritize the requirements, run beta tests, adjust their sales process etc. So, to us this principle is about making software that is valuable to the business.

SOA is about the architecture of the enterprise, using services as a defining concept. A service is an activity that adds value in a process. This activity can be automated by a piece of software, be a human activity, or be a combination of the two. Tying software to a service ensures that it adds value, so this principle fits nicely with SOA. There are also differences: in SOA you check if a service already exists before building it, even when reuse might take longer than rebuilding it. Development time is not necessarily the most important factor, but just one of the factors important for a short time to market. In a SOA you have a holistic approach to the enterprise, not a local development perspective. This means that license cost, maintenance cost and reusability are all factored in. So sometimes you might decide not to build anything, because it does not add any value to the business. Then you might configure what is already there.

Although Agile is more focused on a (single) project or product and SOA focuses on services and the enterprise as a whole, this agile principle, and the SOA principles are perfectly aligned.

Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.

With SOA as a means to realize business value we create a landscape that consists of smaller, traceable services that allow you to assemble your business processes, then rearrange and compose them again and again according to market and customer needs. Because the code isn’t tightly coupled to distinct processes (loose coupling), and services are autonomous, you can reuse services easily. An important business driver for SOA is corporate agility. By decoupling service providers from consumers, it becomes easier to change implementations. By using standardization it becomes easier to change. By using separation of concerns, it becomes easier to change. SOA is all about change… as is Agile development.
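The decoupling argument can be sketched in a few lines (the service names are hypothetical): the consumer is written against the contract alone, so swapping the provider's implementation requires no change to consumer code:

```python
# Sketch of provider/consumer decoupling (hypothetical names): the
# consumer depends only on an abstract contract, so the implementation
# behind it can be replaced without touching the consumer.
from abc import ABC, abstractmethod

class PaymentService(ABC):
    """The service contract: all a consumer is allowed to know."""
    @abstractmethod
    def charge(self, amount_cents: int) -> str: ...

class LegacyPaymentService(PaymentService):
    def charge(self, amount_cents):
        return f"LEGACY-{amount_cents}"

class NewPaymentService(PaymentService):
    def charge(self, amount_cents):
        return f"NEW-{amount_cents}"

def checkout(payments: PaymentService, amount_cents: int) -> str:
    # Consumer code: identical whichever provider is plugged in.
    return payments.charge(amount_cents)

old_receipt = checkout(LegacyPaymentService(), 1999)
new_receipt = checkout(NewPaymentService(), 1999)
```

This is the mechanical core of the corporate-agility claim: change happens behind the contract, not across every consumer.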

The editor of this article wanted me to address ‘generalizations’ because it seems that on this topic the two approaches clash. If you take the approaches literally, SOA will answer the question “when do I generalize?” with “Always.” This of course makes no sense. Sometimes a problem is very specific to a certain department or capability, or it is very new. In both cases (when it is very specific, or when the problem is not well understood), it makes no sense to generalize the solution. Agile development will answer the question “when do I generalize?” with “Never.” This of course makes no sense either. Sometimes a problem is very generic in a business. Take for example the need for customer data. If you have the data available in an open, well-performing application, it makes no sense to duplicate this in every application that needs customer data. It is much easier to integrate with the existing application and focus on the business problem at hand.

So, the extremes of agile and SOA both have disadvantages. Combining the two, will mitigate the risks of both these approaches.

On a side note, SOA is not just about reuse and generalization; it is about creating small units (services) that deliver value instead of hard-to-change silos. This way it is both easier to change and easier to reuse. This design principle is very similar to agile design principles (separation of concerns, avoiding waste, etc.) that are often used in software development.

To conclude: We can learn a lot from each other here: SOA’s got the big picture, agile delivers the small steps (think big, start small; think global, act local). Which brings us to the next Agile principle:

Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.

To gain as much momentum as possible and to prove services work, you best deliver working software regularly and celebrate, to help energize and motivate people. This can be accomplished by working iteratively and incrementally.

Get sponsorship for top-down identification of valuable business processes, one after the other, and translate them into, amongst others, reusable technical transformations and data services. Start small and build your SOA one service and one process at a time.

In an Agile software project usually small (elementary) services are delivered. The Standish Group calls these stepping stones. Making sure that the stepping stones are actually taking you in the direction you need, and not just moving in circles, calls for adherence to some pretty firm principles. The target architecture and guidelines for the services can be used for that purpose. This means that architecture needs to be taken into account by the product owner and is a constraint in the product backlog. Because of the loose coupling in SOA, technology can be chosen at the moment a concrete problem arises, but architecture is a guideline for all. By delivering these stepping stones one by one, regularly and frequently, in your agile project you can eventually build your SOA. A large organization’s Service Oriented Architecture can’t be built at once. To use a famous analogy: the way to eat an elephant is one bite at a time. Again, think BIG but take small steps.

A good practice from Agile development is to make sure that every effort results in value for the customer immediately. If we apply this to SOA, we only build services as they are needed in the context of a project that is needed by the business. Services can be identified either top down or bottom up. But they are always built in the context of a project with end-to-end requirements. This mitigates the risk of any SOA endeavor: building lots of services that are never going to be used anywhere.

A challenge for both Agile and SOA is to integrate iteration planning with program management and portfolio management. It is also challenging to tune releases to what is manageable by operations, business users and administrators. Often these environments need to adjust to the different pace and smaller chunks of functionality (services) that are delivered.

Business people and developers must work together daily throughout the project.

This principle has implications for the software development process. You can argue that the same is true for architecture; business people and architects must work together daily. Information and technology are more and more of strategic importance in a lot of industries, like government, financial services, utility companies etc. Therefore, Business and IT should form a corporate strategy together. Projects only succeed when people of several disciplines work together. Services can only be realized in cooperation between humans in IT and business. Services are important to business and must have business value; architects, developers and testers must understand this to deliver those magnificent services to the business. This should apply to any project. A common vocabulary or common understanding can help all in communications.

So working together is something that agile development and SOA share. Of course there are some important differences. The most notable is that architecture is a process, not a project. Of course, you can start projects to realize parts of the architecture, and architecture is an important part of a project as well. The added value of SOA in this case is that architects talk and work together with other business people as well, not just the project stakeholders. This enables them to spot possible improvements and inconsistencies in the features on the product backlog.

Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.

Agile clearly mentions the value of people and their craftsmanship. Only motivated people with a clear understanding of the vision, mission and strategy towards company goals, encouraged and facilitated by their leaders and colleagues, will get the job done. While SOA doesn’t address this explicitly, it is understood that different process and/or domain owners have to be motivated to use each other’s services. To cooperate and reuse services you need to trust decisions made, and rely on the services others deliver. In a specific project, developers need to trust a colleague’s expertise to reuse services that aren’t designed by them. Service contracts by themselves, or enforcing reuse, will never replace motivation or trust. It is just a fact of life.

In real life, both Agile and SOA are confronted with fear of the unknown and doubt about whether their principles are the right thing to implement. So you have to teach and coach, and then let your trusted people decide how to implement them.

The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.

This has to do with good communication. Within a development team, face-to-face communication works just fine. Between teams (separated in time or place), you need to communicate differently. The important thing is that you only add value. To enable reuse of services, discoverability is an important principle: you write whatever documentation other people need to (re)use your products and services. Nothing more, nothing less. Again, this is just another item on the product backlog. Of course, that is not all you need to do to make sure people understand what the services are about. Architects should spend most of their time in face-to-face communication: with the business, to understand the strategy and objectives; with the developers, to discuss the trade-off between fast delivery and easy maintenance and to explain the meaning and use cases of the existing services. A shortcoming of highly technically oriented SOA people is that they think SLAs, registries, and the like can replace human communication. SOA doesn’t say anything about this condition for success, but it still depends on face-to-face conversation, like any other achievement, for that matter.
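As a small illustration of the discoverability principle, here is a minimal sketch in Python; the registry, service names, and endpoints are all hypothetical. The point is that a service is published with just enough metadata for other teams to find and reuse it:

```python
# Minimal sketch of a discoverable service registry (all names hypothetical).

class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def publish(self, name, endpoint, description, owner):
        """Register a service with just enough metadata for reuse."""
        self._services[name] = {
            "endpoint": endpoint,
            "description": description,
            "owner": owner,
        }

    def discover(self, keyword):
        """Find services whose name or description mentions the keyword."""
        keyword = keyword.lower()
        return [
            name for name, meta in self._services.items()
            if keyword in name.lower() or keyword in meta["description"].lower()
        ]

registry = ServiceRegistry()
registry.publish("CustomerLookup", "http://example/customers",
                 "Retrieve customer master data", owner="CRM team")
registry.publish("InvoicePrint", "http://example/invoices",
                 "Render invoices as PDF", owner="Billing team")

print(registry.discover("customer"))  # ['CustomerLookup']
```

A registry like this supplements, rather than replaces, the face-to-face conversations described above.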

Working software is the primary measure of progress.

In projects for realizing services or SOA, the ultimate goal is to build business processes that add value to the business. A process can be implemented by a piece of software or by a human service. In SOA you can make progress by building nothing at all, just by changing some human activities. The measure of progress is a business measure: the number of sales, profit margins, or cost reductions.

So, this is a matter of scope: Agile development is about software development. If you build software, working software should be the primary measure of progress. SOA is about an Enterprise Architecture, so progress is measured by accomplishing business goals in general.

Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.

This principle is about making workers’ lives good or bad. Happy workers, whether they are architects or developers, are much more successful, work smarter, and deliver better work when they don’t have to work under pressure, stay late, juggle more than one or two projects at the same time, or make longer hours than is healthy for them. This principle should be promoted in any project. It is very sensible and doesn’t clash with SOA.

Continuous attention to technical excellence and good design enhances agility.

Agile teams make technology meet the expectations of the business and make it work: ubiquitous and reliable. SOA demands a high level of technical excellence and design because it demands loose coupling, reusability, and abstraction. These principles require extra attention and learning. Agile developers cannot just design their own classes and methods; they have to involve themselves in the whole system, helping architects and the business design it together. This gives the Agile design sessions a compelling goal: to look beyond the scope of a single project.

Some developers tell me that a high degree of standardization (and architecture, for that matter) is seen as an unnecessary constraint that kills their creativity. To use an analogy again: if you need a package delivered as fast as possible, it might be better if there were no speed limits. For the individual delivery company it might seem faster to just ignore all the rules. But for the customer who ordered the package, there is more in life than the delivery of this one package. He or she also wants the kids to be able to walk to school safely. So, while the package delivery company can achieve technical excellence and good design, it will have to take the environment as a whole into account. This is where SOA can be very helpful. Because of the notion of small units, there are only a couple of guidelines that any project needs to adhere to. Of course, these guidelines need to be evaluated and improved all the time. Standards should be enabling, not restricting just for the sake of it. Compare this to Sweden, where they decided it would be easier to drive on the right side of the road, like the rest of Europe. They changed the rule when this was the right thing to do.

For both SOA and Agile it is very important to apply and adopt practices and principles consistently and disciplined.

Simplicity--the art of maximizing the amount of work not done--is essential.

SOA defines services as the main deliverable. This is a realization of simplicity: smaller building blocks that adhere to certain principles (loose coupling, autonomy, composability, etc.). This approach also helps to define when and what type of software is actually needed. When you only build the software that supports the identified and not yet realized services, you maximize the amount of work not done. We have a saying: “Je gaat niet iets bouwen als je het ook op de achterkant van een sigarendoosje kunt bijhouden”. Literally: “You’re not going to build something (software) if you can track it on the back of a cigar box”. We apply this to our SOA practices too.
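To make the idea of small, composable building blocks concrete, here is a minimal Python sketch (the service functions and order structure are illustrative, not from any real system). Each "service" does one thing, and a new business process is mostly composition rather than new construction:

```python
# Sketch: small, autonomous service functions composed into a business process.

def validate_order(order):
    """One service: sanity-check an order."""
    return bool(order.get("items")) and order.get("customer") is not None

def price_order(order):
    """Another service: compute the order total."""
    return sum(item["qty"] * item["unit_price"] for item in order["items"])

def process_order(order):
    """Compose existing services; only this glue code is newly written."""
    if not validate_order(order):
        raise ValueError("invalid order")
    return {"customer": order["customer"], "total": price_order(order)}

order = {"customer": "ACME",
         "items": [{"qty": 2, "unit_price": 5.0}, {"qty": 1, "unit_price": 3.5}]}
print(process_order(order))  # {'customer': 'ACME', 'total': 13.5}
```

The work not done is everything `process_order` did not have to reimplement.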

The best architectures, requirements, and designs emerge from self-organizing teams.

This principle applies to SOA as well as to agile software development. In fact, SOA in itself does not prescribe how it should be achieved. You can do this with self-organizing teams, or with an architecture ordained by the CIO.

Realizing a SOA, like any other accomplishment, requires people of all disciplines who are knowledgeable in their respective fields. These people don’t need to be steered if they like to work together. Coordination between business people, architects, process modellers, testers, and service developers is required, and multidisciplinary teams need to be created. The important thing here is that architecture is required because the solutions (services) have to fit into the big picture. Architecture has to be part of the self-organizing team and carried from team to team, and this has to be organized in some way. Agile practices and self-organization alone can’t solve those architectural challenges.

Solution architecture for instance, can be applied in an agile manner, through refactoring, designed in small increments and expressed mainly in code. Architecture then emerges. This is the bottom up part of architecture that meets in the middle with the top-down part.

At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

Because SOA does not say anything about the way to achieve it, just which principles and building blocks we need, this principle can easily be applied to the architectural effort.

In general you have to continuously test and evaluate your efforts, measure quality, keep learning, and refactor to design and build reusable, valuable services. Some high-level but well-defined quality standards must be put in place. SOA gives the team a concrete product (the service) to talk about and a set of principles to measure its quality against. A retrospective centered on the services that were built can contain questions like:

  • How can we improve reusability?
  • How can we make our service more valuable in the eyes of the stakeholder (customer)?
  • Which parts of the system could we replace by existing services?
  • How can we deliver our next service faster with fewer bugs?
  • Can we generalize?

Daily stand-ups, continuous integration, and iteration demos can help to foster inspection and adaptation of the architecture as well as the development of software.

In general, Agile is the hand that works in the glove, and SOA is the glove: SOA’s scope is enterprise wide, while agile development is about the way you develop the part that is supported by software. Most principles of SOA and Agile are not in conflict. Whenever they are, they keep each other sane. Agile development without a clear vision of the goals and objectives of the company is futile. Service-oriented architecture without a clear vision of how to make it real using agile development principles is a waste of time and money.

SOA and Agile are both about agility (this applies to a number of principles that don’t clash). SOA is about architectural structure, and agile development is about delivering fast. This way they keep each other in balance. One does not make sense without the other.

About the author

Mary Beijleveld graduated in business administration (MScBA) at the University of Groningen. She works as a senior business consultant, business architect, and Agile project manager at Approach in Nijkerk, the Netherlands. Her focus area is the business value of architecture in general and of SOA and BPM in particular. She prefers working at the intersection of business and technology, where complex issues have to be addressed and the pressure to make practical improvements is high. In addition, she has a passion for Agile project and development methods, networking, blogging, and writing opinion articles. She is a certified scrum master, has long-term experience in managing projects and issue control, and has broad knowledge of several project management methods.

OPC: Interoperability standard for industrial automation

In today’s complex economy, information is the key to business success and profitability

By Thomas J. Burke | System Integration

The OPC Foundation is working with consortia and standard development organizations to achieve the goals of superior production with digitalization. The year 2018 has been an interesting, record-breaking year, with end users, system integrators, and suppliers focused on maximizing their engineering investments and increasing productivity. End users are capitalizing on the data and information explosion. Consortia and standard development organizations (SDOs) are helping suppliers to exceed expectations.

Integration opportunity

Information integration requires standards organizations to work together on interoperability, with synergistic opportunities to address convergence and to prevent overlapping, complex information-model architectures. The standards organizations have been working independently, and now it is time for them to work together to harmonize their data models with those of other standards organizations. The success of an SDO should be measured by the level of open interoperability it provides.

When OPC UA was first conceived, it focused on developing a strategy for platform independence and a solution that allowed the operational technology (OT) and information technology (IT) worlds to communicate, have seamless interoperability, and be able to agree on syntactical and semantic data exchange formats.
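As a toy illustration of agreeing on both syntax and semantics, the sketch below uses plain Python and JSON; the tag name, field names, and unit check are assumptions for illustration, not an OPC UA wire format. The producer and consumer share not only a data format but also the meaning of the data:

```python
import json

# Sketch: OT and IT agree on syntax (JSON here) and semantics
# (explicit engineering units and a shared tag naming convention).

def publish_measurement(tag, value, unit, quality="Good"):
    """OT side: emit a measurement with explicit semantic context."""
    return json.dumps({
        "tag": tag, "value": value, "unit": unit, "quality": quality,
    })

def consume_measurement(payload):
    """IT side: parse the syntax, then check the semantics."""
    msg = json.loads(payload)
    if msg["unit"] != "degC":          # semantic check, not just parsing
        raise ValueError("expected a temperature in degC")
    return msg["value"]

wire = publish_measurement("Reactor1.Temperature", 72.5, "degC")
print(consume_measurement(wire))  # 72.5
```

Without the agreed unit and tag convention, the consumer could parse the bytes but not safely interpret them; that is the gap syntactic-plus-semantic agreement closes.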

The OPC Foundation started developing a service-oriented architecture, recognizing the opportunity to separate the services from the data. It consciously developed a rich, complex information model that allowed the OPC data to be modeled from the OPC classic specifications.

OPC Foundation

The mission of the OPC Foundation is to manage a global organization in which users, vendors, and consortia collaborate to create standards for multivendor, multiplatform, secure, and reliable information integration interoperability in industrial automation and beyond. To support this mission, the OPC Foundation creates and maintains specifications, ensures compliance with OPC specifications via certification testing, and collaborates with standards organizations.

OPC technologies were created to allow information to be easily and securely exchanged between diverse platforms from multiple vendors and to allow seamless integration of those platforms without costly, time-consuming software development. This frees engineering resources to do the more important work of running the business. Today, there are more than 4,200 suppliers who have created more than 35,000 different OPC products used in more than 17 million applications. The estimate of the savings in engineering resources alone is in the billions of dollars. The OPC Foundation strategy is:

  • rules for OPC UA Companion Specifications developed together with partners
  • predefined process for joint OPC UA companion specifications
  • templates to ensure standardized format and potential certifications
  • compliance
  • intellectual property
  • working processes

The OPC Foundation is focused on evangelizing the OPC UA information framework and collaborating with standards organizations and consortia to incorporate data models that reflect the knowledge of their subject-matter experts.

Information models

OPC UA, beyond being a secure, interoperable standard for moving data and information from the embedded world to the cloud, is an open architecture for a wide range of application information models that add meaning and context to data. Information modeling allows organizations to plug their complex information models into OPC UA. This brings information integration and interoperability across disparate devices and applications. Using the common OPC UA framework was a way for all standards organizations to seamlessly connect their data between the IT and OT worlds. This greatly simplifies the end user's task of digitalization.

Service-oriented collaborative architecture

The OPC Foundation collaboration across many organizations is a very important part of the OPC UA service-oriented architecture that lets other organizations model their data and have it seamlessly and securely connected. The concept is simple. An organization develops its data model, mapping it to an OPC UA information model. Vendors can build a server that publishes information, providing the appropriate context, syntax, and semantics. Client applications or subscribers can discover and understand the syntax and semantics of the data model from the respective organizations. An OPC UA server is a data engine that gathers information and presents it in ways that are useful to various types of OPC UA client devices. Devices could be located on the factory floor, like a human-machine interface, proprietary control program, historian database, dashboard, or sophisticated analytics program that might be in an enterprise server or in the cloud.
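The idea of a server as a browsable data engine can be sketched in plain Python. This is only an analogy for an OPC UA-style address space (nodes organized so clients can discover and read them); it is not the actual OPC UA API, and the node names are invented:

```python
# Plain-Python analogy of a browsable address space served by a data engine.

class Node:
    def __init__(self, name, value=None):
        self.name = name
        self.value = value
        self.children = {}

    def add(self, child):
        self.children[child.name] = child
        return child

class Server:
    def __init__(self):
        self.root = Node("Objects")

    def browse(self, path):
        """Resolve 'A/B/C'-style paths the way a client discovers nodes."""
        node = self.root
        for part in path.split("/"):
            node = node.children[part]
        return node

server = Server()
line = server.root.add(Node("PackagingLine1"))
line.add(Node("Speed", value=120.0))

print(server.browse("PackagingLine1/Speed").value)  # 120.0
```

A client that can browse the structure can discover data it was never hard-coded to know about, which is the point of a generic information model.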

The initial collaboration that the OPC Foundation engaged with was called OpenO&M, which was a cooperation between OPC Foundation, MIMOSA, ISA95, and OAGIS. This first collaboration resulted in several OPC UA companion specifications that were focused at the IT world and integration with the factory floor. The graphic shows the logos of the numerous standards organizations that the OPC Foundation has partnered with. These specifications allow generic applications to connect to different devices and applications to discover and consume the data and information.

Fast forward to late 2018, and the OPC Foundation has now partnered with more than 40 different standards organizations. These organizations include every major fieldbus organization, robotics, machine tools, pharmaceutical, industrial kitchens, oil and gas, water treatment, manufacturing, automotive, building automation, and more. All of these organizations are now developing or have already released OPC UA companion specifications, and these organizations can take advantage of the service-oriented architecture of OPC UA.

Some of the more important consortia that are predominantly end-user driven include the oil and gas industry, pharmaceutical NAMUR, and VDMA (the Mechanical Engineering Industry Association). There is also a lot of energy being "energized" in the energy industry (no pun intended). There are exciting trade shows in the machine tool industry and the packaging industry. Significantly, suppliers and end users are realizing the volume of data from all the devices and applications that needs to be turned into useful information.

One of the most exciting organizations that the OPC Foundation has collaborated with is VDMA, representing more than 3,200 companies in the mechanical and systems engineering industry in Germany and the rest of Europe, an industry dominated by subject-matter experts. It represents the breadth of the manufacturing industry, developing and leveraging standards across multiple industries.

The OPC Foundation activities include collaborations with a number of industries and applications, including automotive, building automation, energy, oil and gas, robotics, welding, pharmaceutical serialization, transportation, machine tools, product life-cycle management.


Governments and regulatory agencies are now becoming actively engaged in the standard-setting process. Industrie 4.0 started in Germany and has spawned a number of regional equivalents throughout the world that are accelerating standards development and adoption for complete system-wide interoperability. Examples include Industry 4.0 concepts being adopted in countries with various initiatives, including Made in China 2025, the Japan Industrial Value Chain Initiative (IVI), Make in India, and Indonesia 4.0. Clearly there is a future for a holistic automation, business information, and manufacturing execution architecture to improve industry by integrating all aspects of production and commerce across company boundaries for greater efficiency.

A lot is happening in the world of open standards. The OPC Foundation is tightly engaged in collaboration with a multitude of organizations and is reaching across to other verticals beyond the domain of industrial automation.

Vertical integration

The whole concept of IT and OT convergence is very important to suppliers and even more important to end users, because they want a strategy for vertical integration from the plant floor (sometimes called the shop floor) to the top floor or enterprise. What matters most in this equation is that data from the plant floor's variety of field devices can be consumed and then turned into useful information as it moves up the food chain to the enterprise. Essentially, data becomes information as it is converted in the different layers of the vertical integration architecture.

Integration is bidirectional between sensors and controllers and the enterprise/cloud, communicating all types of information, including control parameters, set points, operating parameters, real-time sensor data, asset information, real-time tracking, and device configurations. This architecture creates the basis for digitalization with intelligent command-and-control to improve productivity, drive make-to-order manufacturing, improve customer responsiveness, and achieve agile manufacturing and profits.
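A minimal sketch of this data-to-information conversion on the way up, with a set point flowing down (plain Python; the layer functions, values, and on-target threshold are illustrative):

```python
# Sketch of vertical integration: raw plant-floor samples are aggregated
# into information on the way up, while a set point flows down.

def aggregate(samples):
    """Edge layer: turn raw sensor samples into a summarized record."""
    return {"min": min(samples), "max": max(samples),
            "avg": sum(samples) / len(samples)}

def to_kpi(summary, target):
    """Enterprise layer: turn the summary into a business-level indicator."""
    return {"on_target": abs(summary["avg"] - target) < 1.0, **summary}

raw = [99.2, 100.4, 100.1, 99.8]   # field-device data flowing up
setpoint = 100.0                    # enterprise decision flowing down
print(to_kpi(aggregate(raw), setpoint))
```

Each layer adds context, which is exactly the "data becomes information" conversion described above.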

OPC collaboration process

The OPC Foundation strategy is pretty simple. It has an established set of processes, so organizations can work together to develop OPC UA companion specifications complete with templates for the standardized format of the data to be understood and consumed generically. It establishes working groups and protects the intellectual property. All of the companion specifications become open standards to facilitate the whole vision of success measured by the level of adoption of the technology.

The OPC Foundation also has the certification program, which allows the companion specifications to be certified for interoperability.


The industrial and process manufacturing industries have realized they can improve production by using data to gain insights and to optimize. This is leading to the movement toward digital manufacturing, which is the topic of many new conferences all over the world on big data, machine learning, artificial intelligence, the Industrial Internet of Things (IIoT), IoT, cloud computing, edge computing, and the fog. End users and suppliers are overwhelmed with all these new innovations and are sorting out what makes sense to leverage from a business-value perspective to maximize their effectiveness in daily production operations. Collaboration between the OPC Foundation and a wide range of other industry organizations is bringing clarity.

The Open Process Automation Standard takes flight

A detailed look at O-PAS™ Standard, Version 1.0

By Dave Emerson | Cover Story

Process automation end users and suppliers have expressed interest in a standard that will make the industry much more open and modular. In response, the Open Process Automation™ Forum (OPAF) has worked diligently at this task since November 2016 to develop process automation standards. The scope of the initiative is wide-reaching, as it aims to address the issues associated with the process automation systems found in most industrial automation plants and facilities today (figure 1).

It is easy to see why a variety of end users and suppliers are involved in the project, because the following systems are affected:

  • manufacturing execution system (MES)
  • distributed control system (DCS)
  • human-machine interface (HMI)
  • programmable logic controller (PLC)
  • input/output (I/O)

In June 2018, OPAF released a technical reference model (TRM) snapshot as industry guidance of the technical direction being taken for the development of this new standard. The organization followed the TRM snapshot with the release of the OPAS™ Version 1.0 in January 2019. Version 1.0 addresses the interoperability of components in federated process automation systems. This is a first stop along a three-year road map with annual releases targeting the themes listed in table 1.

Table 1. The O-PAS Standard three-year release road map addresses progressively more detailed themes.

Target date | Version           | Theme
2019        | O-PAS Version 1.0 | Interoperability
2020        | O-PAS Version 2.0 | Configuration portability
2021        | O-PAS Version 3.0 | Application portability
By publishing versions of the standard annually, OPAF intends to make its work available to industry expeditiously. This will allow suppliers to start building products and return feedback on technical issues; this feedback, along with end-user input, will steer O-PAS development. O-PAS Version 1.0 was released as a preliminary standard of The Open Group to allow time for industry feedback.

The OPAF interoperability workshop in May 2019 is expected to produce feedback to help finalize the standard. The workshop allows member organizations to bring hardware and software that support O-PAS Version 1.0, testing it to verify the correctness and clarity of this preliminary standard. The results will not be published but will be used to update and finalize the standard.

Figure 1. A broad sampling of suppliers and end users are highly interested in the scope of the OPAS under development by OPAF, because it touches on all the key components of industrial automation systems: hardware (I/O), the communication network, system software (e.g., run time, namespace), application software, and the data model.

Some terminology

For clarity, a summary of the terminology associated with the OPAF initiative is:

  • The Open Group: The Open Group is a global consortium that helps organizations achieve business objectives through technology standards. The membership of more than 625 organizations includes customers, systems and solutions suppliers, tool vendors, integrators, academics, and consultants across multiple industries.
  • Open Process Automation Forum: OPAF is an international forum of end users, system integrators, suppliers, academia, and other standards organizations working together to develop a standards-based, open, secure, and interoperable process control architecture. Open Process Automation is a trademark of The Open Group.
  • O-PAS Standard, Version 1.0 (O-PAS): OPAF is producing the OPAS Standard under the guidance of The Open Group to define a vendor-neutral reference architecture for construction of scalable, reliable, interoperable, and secure process automation systems.

Standard of standards

Creating a "standard of standards" for open, interoperable, and secure automation is a complex undertaking. OPAF intends to speed up the process by leveraging the valuable work of various groups in a confederated manner.

The OPAS Standard will reference existing and applicable standards where possible. Where gaps are identified, OPAF will work with associated organizations to update the underlying standard or add OPAS requirements to achieve proper definition. Therefore, OPAF has already established liaison agreements with the following organizations:

  • Control System Integrators Association (CSIA)
  • Distributed Management Task Force (DMTF), specifically for the Redfish API
  • FieldComm Group
  • Industrial Internet Consortium (IIC)
  • International Society of Automation (ISA)
  • OPC Foundation
  • PLCopen
  • ZVEI

Additionally, OPAF is in discussions with AutomationML and the ISA Security Compliance Institute (ISCI) as an ISA/IEC 62443 validation authority. In addition to these groups, the OPC Foundation has joined OPAF as a member, so no liaison agreement is required.

As an example of this cooperation in practice, OPAS Version 1.0 was created with significant input from three existing standards, including:

  • ISA/IEC 62443 (adopted by IEC as IEC 62443) for security
  • OPC UA adopted by IEC as IEC 62541 for connectivity
  • DMTF Redfish for systems management
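Redfish models system-management data as JSON resources retrieved over REST. The sketch below parses a canned, simplified payload; the field layout follows Redfish conventions, but a real client would issue HTTP GETs against `/redfish/v1/...` endpoints, and this particular resource is invented for illustration:

```python
import json

# Sketch: reading health from a Redfish-style JSON resource (canned payload,
# not fetched from a real endpoint).

payload = json.dumps({
    "@odata.id": "/redfish/v1/Systems/DCN-1",
    "Id": "DCN-1",
    "PowerState": "On",
    "Status": {"State": "Enabled", "Health": "OK"},
})

def health_of(resource_json):
    """Extract the health field a management application would monitor."""
    resource = json.loads(resource_json)
    return resource["Status"]["Health"]

print(health_of(payload))  # OK
```

Because the resource model is standardized, the same monitoring code can query hardware from different vendors.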

Next step: Configuration portability

Configuration portability, now under development for OPAS Version 2.0, will address the requirement to move control strategies among different automation components and systems. This has been identified by end users as a requirement to allow their intellectual property (IP), in the form of control strategies, to be portable. Existing standards under evaluation for use in Version 2.0 include:

  • IEC 61131-3 for control functions
  • IEC 61499 for execution coordination
  • IEC 61804 for function blocks
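As an illustration of why portable control strategies matter, here is a sketch of an IEC 61131-3-style function block written in Python (the block, its limits, and the input sequence are hypothetical). State lives in the block instance, so the same strategy can be re-instantiated on any conformant runtime:

```python
# Sketch of a portable control strategy as a function block: internal state
# plus a cyclic call, in the style of an IEC 61131-3 function block.

class Hysteresis:
    """Output ON above the high limit, OFF below the low limit
    (e.g., a level switch with deadband)."""

    def __init__(self, low, high):
        self.low, self.high = low, high
        self.out = False            # retained state between scans

    def __call__(self, pv):
        if pv >= self.high:
            self.out = True
        elif pv <= self.low:
            self.out = False
        return self.out             # unchanged inside the deadband

fb = Hysteresis(low=20.0, high=80.0)
print([fb(pv) for pv in (10, 85, 50, 15)])  # [False, True, True, False]
```

If the block's behavior is standardized, the end user's intellectual property (the strategy) can move between suppliers' controllers without rework.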

O-PAS Version 3.0 will address application portability, which is the ability to take applications purchased from software suppliers and move them among systems within a company in accordance with applicable licenses. This release will also include the first specifications for hardware interfaces.

Under the OPAS hood

The five parts that make up O-PAS Version 1.0 are listed below with a brief summary of how compliance will be verified (if applicable):

  • Part 1 — Technical Architecture Overview (informative)
  • Part 2 — Security (informative)
  • Part 3 — Profiles
  • Part 4 — Connectivity Framework (OCF)
  • Part 5 — System Management

Part 1 - Technical Architecture Overview (informative) describes an OPAS-conformant system through a set of interfaces to the components. Read this section to understand the technical approach OPAF is following to create the O-PAS.

Part 2 - Security (informative) addresses the necessary cybersecurity functionality of components that are conformant to OPAS. It is important to point out that security is built into the standard and permeates it, as opposed to being bolted on as an afterthought. This part of the standard is an explanation of the security principles and guidelines that are built into the interfaces. More specific security requirements are detailed in normative parts of the standards. The detailed normative interface specifications are defined in Parts 3, 4, and 5. These parts also contain the associated conformance criteria.

Part 3 - Profiles  defines sets of hardware and software interfaces for which OPAF will develop conformance tests to make sure products interoperate properly. The O-PAS Version 1 profiles are:

  • Level 1 Interoperability Hardware Profile: A certified product claiming conformance to this profile shall implement OSM-Redfish.
  • Level 2 Interoperability Hardware Profile: A certified product claiming conformance to this profile shall implement OSM-Redfish BMC.
  • Level 1 Interoperability Software Profile: Software claiming conformance to this profile shall implement OCF-001: OPC UA Client/Server Profile.
  • Level 2 Interoperability Software Profile: Software claiming conformance to this profile shall implement OCF-002: OPC UA Client/Server and Pub/Sub Profile.

The term "Level" in the profile names refers to tiers of capability: for example, the Level 2 software profile adds Pub/Sub support on top of the Level 1 Client/Server profile.

Part 4 - Connectivity Framework (OCF) forms the interoperable core of the system. The OCF is more than just a network, it is the underlying structure allowing disparate components to interoperate as a system. The OCF will use OPC UA for OPAS Versions 1.0, 2.0, and 3.0.
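The two OPC UA interaction patterns named in the profiles, request/response (client/server) and publish/subscribe, can be sketched with an in-process broker. This is plain Python for illustration, not the OPC UA API, and the topic names are invented:

```python
# In-process sketch of the two interaction patterns: read-on-request
# (client/server) and notify-on-change (pub/sub).

class Broker:
    def __init__(self):
        self.values = {}          # served on request (client/server)
        self.subscribers = {}     # notified on change (pub/sub)

    def write(self, topic, value):
        self.values[topic] = value
        for callback in self.subscribers.get(topic, []):
            callback(value)       # push to subscribers

    def read(self, topic):        # client/server style: pull on demand
        return self.values[topic]

    def subscribe(self, topic, callback):   # pub/sub style: register interest
        self.subscribers.setdefault(topic, []).append(callback)

broker = Broker()
received = []
broker.subscribe("Line1/Speed", received.append)
broker.write("Line1/Speed", 120.0)

print(broker.read("Line1/Speed"), received)  # 120.0 [120.0]
```

Client/server suits configuration and on-demand reads; pub/sub suits many consumers tracking fast-changing process data.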

Part 5 - System Management covers foundational functionality and interface standards to allow the management and monitoring of components using a common interface. This part will address hardware, operating systems and platform software, applications, and networks-although at this point Version 1.0 only addresses hardware management.

Conformance criteria are identified by the verb "shall" within the O-PAS text. An OPAF committee is working on a conformance guide document that will be published later this year, which explains the conformance program and requirements for suppliers to obtain a certification of conformance.

Technical architecture

The OPAS Standard supports communication interactions that are required within a service-oriented architecture (SOA) for automation systems by outlining the specific interfaces the hardware and software components will use. These components will be used to architect, build, and start up automation systems for end users.
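The value of agreeing on interfaces rather than implementations can be sketched as follows (Python; the interface, vendor class, and tags are illustrative, not an O-PAS-defined API). Any component that honors the interface can be dropped into the same system:

```python
from abc import ABC, abstractmethod

# Sketch: components agree on an interface, not an implementation,
# so vendors can be mixed within one system.

class ControlComponent(ABC):
    @abstractmethod
    def read(self, tag: str) -> float: ...

    @abstractmethod
    def write(self, tag: str, value: float) -> None: ...

class VendorADevice(ControlComponent):
    """One vendor's implementation; others would differ internally."""
    def __init__(self):
        self._tags = {}

    def read(self, tag):
        return self._tags.get(tag, 0.0)

    def write(self, tag, value):
        self._tags[tag] = value

def ramp_up(component: ControlComponent, tag: str):
    """Application code works against any conformant component."""
    component.write(tag, component.read(tag) + 10.0)

device = VendorADevice()
ramp_up(device, "Pump1/Speed")
print(device.read("Pump1/Speed"))  # 10.0
```

The application (`ramp_up`) never names the vendor, which is what makes components interchangeable.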

The vision for the OPAS Standard is to allow the interfaces to be used in an unlimited number of architectures, thereby enabling each process automation system to be "fit for purpose" to meet specific objectives. The standard will not define a system architecture, but it will use examples to illustrate how the component-level interfaces are intended to be used. System architectures (figure 2) contain the following elements:

Distributed control node (DCN): A DCN is expected to be a microprocessor-based controller, I/O, or gateway device that can handle inputs and outputs and computing functions. A key feature of O-PAS is that hardware and control software are decoupled. So, the exact function of any single DCN is up to the system architect. A DCN consists of hardware and some system software that enables the DCN to communicate on the O-PAS network, called the OCF, and also allows it to run control software.

Distributed control platform (DCP): A DCP is the hardware and standard software interfaces required in all DCNs. The standard software interfaces are a common platform on top of which control software programs run. This provides the physical infrastructure and interchangeability capability so end users can control software and hardware from multiple suppliers.

Distributed control framework (DCF): A DCF is the standard set of software interfaces that provides an environment for executing applications, such as control software. The DCF is a layer on top of the DCP that provides applications with a consistent set of O-PAS related functions no matter which DCN they run in. This is important for creating an efficient marketplace for O-PAS applications.

OPAS connectivity framework (OCF): The OCF is a royalty-free, secure, and interoperable communication framework specification. In O-PAS Version 1, the OCF uses OPC UA.

Advanced computing platform (ACP): An ACP is a computing platform that implements DCN functionality but has scalable computing resources (memory, disk, CPU cores) to handle applications or services that require more resources than are typically available on a small profile DCP. ACPs may also be used for applications that cannot be easily or efficiently distributed. ACPs are envisioned to be installed within on-premise servers or clouds.
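The layering of these elements can be pictured with a small sketch. The classes and names below are illustrative only and do not come from the standard; they show how a DCN composes a vendor-specific DCP (hardware) with a standard DCF (software interfaces), so the same control application can run on nodes from different vendors:

```python
from dataclasses import dataclass, field

@dataclass
class DCP:
    """Distributed control platform: the hardware side of a DCN."""
    vendor: str
    cpu_cores: int
    io_channels: int

@dataclass
class DCF:
    """Distributed control framework: the standard software interfaces
    that host control applications, independent of the underlying DCP."""
    apps: list = field(default_factory=list)

    def load(self, app_name: str) -> None:
        self.apps.append(app_name)

@dataclass
class DCN:
    """Distributed control node: a DCP plus a DCF, reachable on the OCF."""
    name: str
    platform: DCP
    framework: DCF

# Hardware and control software are decoupled, so the same application
# can be loaded onto nodes built from different vendors' hardware.
node_a = DCN("dcn-a", DCP("VendorX", cpu_cores=2, io_channels=16), DCF())
node_b = DCN("dcn-b", DCP("VendorY", cpu_cores=8, io_channels=0), DCF())
for node in (node_a, node_b):
    node.framework.load("pid_loop_101")
```

In this picture an ACP would simply be a node whose platform carries far more compute resources than a small-profile DCP.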

Within the OPAS Standard, DCNs represent a fundamental computing building block (figure 3). They may be physical or virtual (when virtual, they are shown as a DCF, as in figure 2), big or small, and may carry no I/O or varying amounts of it. At the moment, allowable I/O density per DCN is not settled, so standardization in conjunction with the market may drive the final configuration.

DCNs also act as a gateway to other networks or systems, such as legacy systems, wireless gateways, digital field networks, I/O, and controllers like DCS or PLC systems. Industrial Internet of Things (IIoT) devices can also be accessed via any of these systems.

Figure 2. OPAS establishes a system architecture organizing process automation elements into interoperable groupings.

Building a system

End users today must work with and integrate multiple systems in almost every process plant or facility. Therefore, the OPAS Standard was designed so users can construct systems from components and subsystems supplied by multiple vendors, without requiring custom integration. With the OPAS Standard it becomes feasible to integrate multiple systems, enabling them to work together as one OPAS-compliant whole. This reduces work on capital projects and during the lifetime of the facility or plant, leading to a lower total cost of ownership.

By decoupling hardware and software and employing an SOA, the necessary software functions can be situated in many different locations or processors. Not only can software applications run in all hardware, but they can also access any I/O to increase flexibility when designing a system.

One set of components can be used to create many different systems using centralized architectures, distributed architectures, or a hybrid of the two. System sizes may range from small to large and can include best-in-class elements of DCS, PLC, SCADA, and IIoT systems and devices as needed.

Information technology (IT) can also be incorporated deeper into industrial automation operational technology (OT). For example, DMTF Redfish is an IT technology for securely managing data center platforms. OPAF is adopting this technology to meet OPAS system management requirements.
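As a sketch of how that looks in practice: Redfish models managed platforms as JSON resources served over HTTPS (e.g., under /redfish/v1/). The abridged payload below is invented for illustration, but extracting a system's rolled-up health from such a response needs only standard JSON handling:

```python
import json

# Abridged, invented example of a Redfish-style system resource, as a
# platform might return it from GET /redfish/v1/Systems/1.
payload = """
{
  "@odata.id": "/redfish/v1/Systems/1",
  "PowerState": "On",
  "Status": {"State": "Enabled", "Health": "OK"}
}
"""

def system_health(doc: str) -> str:
    """Return the rolled-up health reported by a managed system."""
    system = json.loads(doc)
    return system["Status"]["Health"]

print(system_health(payload))  # → OK
```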

Comprehensive and open

Each industrial automation supplier offers a variety of devices and systems, most of which are proprietary and incompatible with similar products from other vendors and sometimes with earlier versions of their own products. End users and system integrators trying to integrate automation systems of varying vintages from different suppliers therefore have a challenging job.

To address these issues, OPAF is making great strides toward assembling a comprehensive, open process automation standard. Partially built on other established industry standards, and extending to incorporate most aspects of industrial automation, the O-PAS Standard will greatly improve interoperability among industrial automation systems and components. This will lower implementation and support costs for end users, while allowing vendors to innovate around an open standard.

For more information on OPAS Version 1.0, please download the standard. Feedback can be submitted by email.

Figure 3. DCNs are conceived as modular elements containing DCP (hardware) and DCF (software), both of which are used to interface field devices to the OCF.

Reader Feedback

We want to hear from you! Please send us your comments and questions about this topic.

Zero-Trust Architecture Is Incomplete Without Digital Signatures

Zero trust is often mistakenly understood as merely a matter of cybersecurity; however, adhering to zero trust is a crucial factor in agency IT modernization.

By Geoff Mroz, Principal Digital Strategist, Adobe

By design, zero trust mandates that all resources, regardless of physical or network location, undergo verification, authentication, and thorough authorization before being allowed access to another resource. The Office of Management and Budget has set 2024 as the target date for the completion of a zero trust architecture throughout the Federal government.

Agencies are simultaneously in the midst of the largest digital transformation they have ever undertaken. The pandemic expedited this change, often resulting in hasty and makeshift solutions. On the path to lasting modernization, zero trust must be assumed in every digital interaction, of which signatures are among the most prolific.

Accelerating the use of e-signatures is a priority identified for all agencies in the “Executive Order on Transforming Federal Customer Experience and Service Delivery to Rebuild Trust in Government.” E-signatures can dramatically reduce paperwork, broaden the accessibility of government services, and streamline cumbersome approval processes.

However, it is imperative that e-signatures meet the security standards set out by the OMB’s zero trust memorandum. In instances where additional levels of assurance (LOA) are necessary, digital signatures are preferable to e-signatures.

What to look for in a digital signature solution  

Security requirements for e-signatures vary by region, agency, and data and classification levels. E-signatures use common authentication methods such as passwords or email verification, but for sensitive information, such minimal precautions are not nearly sufficient.

There are some cases where additional LOA for signer identification are needed, and that’s where digital signatures come in. Digital signatures are a specific type of e-signature that is backed by a digital certificate as proof of a signer’s identity that is cryptographically bound to the signature field using public key infrastructure (PKI).

To achieve this strong security posture, digital signatures must uniquely identify each signer. Furthermore, the signer’s identity must be reconfirmed prior to signing with tools such as a PIN or a secure signature device like a USB token or cloud-based hardware security module. Digital signatures must also demonstrate proof of signing with a tamper-evident seal and have the ability to re-confirm authenticity for at least 10 years.
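The tamper-evident property can be illustrated with a stand-in. Real digital signatures use asymmetric PKI and an X.509 certificate bound to the signature field; the sketch below substitutes a keyed hash (HMAC) from the Python standard library purely to show the core behavior — any post-signing change to the bytes invalidates the seal:

```python
import hashlib
import hmac

SIGNER_KEY = b"per-signer secret"  # stand-in for a signer's private key

def seal(document: bytes) -> str:
    """Compute a tamper-evident seal over the document bytes."""
    return hmac.new(SIGNER_KEY, document, hashlib.sha256).hexdigest()

def verify(document: bytes, recorded_seal: str) -> bool:
    """Re-compute the seal and compare in constant time."""
    return hmac.compare_digest(seal(document), recorded_seal)

original = b"Benefits application, approved 2024-01-15"
s = seal(original)
print(verify(original, s))                # → True
print(verify(original + b"(edited)", s))  # → False
```

With real PKI, verification additionally checks the signer's certificate chain, which is what allows authenticity to be re-confirmed years after signing.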

For agencies seeking to liberate themselves from arduous paper-based authorizations, while also adhering to zero trust’s strict identity and access management standards, digital signatures are an invaluable tool.

Government agencies are eager to adopt digitization practices, such as digital signatures, that will simplify their workload and make the lives of everyday citizens easier. However, security is paramount. To ensure solutions adopted by agencies meet their individual security needs, the Federal Risk and Authorization Management Program (FedRAMP) was created.

How FedRAMP authorization provides peace of mind 

FedRAMP authorizes cloud-based solutions for government agencies at Low, Moderate, and High Impact levels. The Moderate Impact level accounts for 80% of authorizations and is designed to protect sensitive data, such as personally identifiable information (PII). Furthermore, the FedRAMP Moderate designation aligns with NIST controls for zero trust and encryption management, and is FIPS 140-2 validated, which ensures that cryptographic modules have met NIST security requirements.

With over 325 security controls verified by third-party auditors, agencies can have confidence in their FedRAMP Moderate tools to ensure the protection of sensitive information and compliance with any zero trust architecture.

In every department, at every level, both internally and externally, signatures are required to keep track of approval processes and decision making. In the modern, hybrid world, paper signatures are no longer feasible, and government agencies should not be trapped in the past because their security standards are inherently stricter.

Digital signatures can be used for things like government benefits applications, healthcare forms, and other documents that are part of higher-value, higher-risk, or strictly regulated processes. A FedRAMP Moderate digital signature solution eases the signing experience for employees and constituents, meaning public interactions can have the speed, ease, and security that modern government requires.

The modern government mindset  

Additionally, an inevitable factor in the conversation around IT modernization in the federal government is interoperability. With over 100 federal agencies, it is crucial that any new tools integrate seamlessly with existing software and function effectively across agencies.

When considering the capacity for interoperability and integration in federal agencies, digital signature solutions should be compatible with personal ID verification (PIV) cards, common access cards (CAC) and mobile credentials.

When prioritizing efficiency, solutions capable of wrapping document creation, signature capture, tracking, and archiving into a consolidated secure workflow are preferable, as they relieve agency employees from the burden of double- and triple-checking whether their documents and credentials comply with agency rules.

Achieving digital transformation goals, zero trust architecture, and cross-agency collaboration should not be viewed as competing priorities. In fact, the three should be understood as part of the same, modern government mindset. With the right digital signature tool, agencies can satisfy a component of all these objectives at once.

While digitization tools can unlock unprecedented capabilities for government, protecting citizen and agency data is critical. Agency IT leaders should seek out FedRAMP certified solutions that can enable them to work effectively in a digital world, without compromising on security.

About the Author

Geoff is the Principal Digital Strategist at Adobe. In his nearly 14 years with the company, Geoff has been an invaluable solutions consultant who consistently strives for digital innovation. He possesses extensive knowledge of enterprise security architecture, full-lifecycle enterprise application development, agile coding, enterprise application integration, SOA, and UI/UX design and construction across a wide range of technical architectures. Perhaps even more important, Geoff has a knack for bringing people, businesses, and technology together to help companies deliver on the promise of digital transformation and creative productivity.

Geoff can be reached online and at our company website.


Software development tools

According to research by Ordnance Survey, sustainability projects are emerging as hot zones for software developers, attracted by the chance to do some good as well as make good money ...

Tag "SOA"

Indian telecom major Bharti Airtel has recently adopted Oracle solutions to build a standard service-oriented architecture for all its lines of business (LOB). The telecom operator has four ...


To foster new knowledge and insight, support interdisciplinary research, and drive integration between research computing, data science, visualization, human interaction, and data-capture technologies by leveraging state and national opportunities.


Since being established in 2014, the 3D Visualization Center has enabled UW to develop a community of innovative visualization users seeking to enhance their teaching, research, and entrepreneurial activity. We provide support on collaborative multidisciplinary grants and external service-oriented contracts. We also engage with the community to nurture adoption of digital technologies.

Housed in the Energy Innovation Center (EIC), the facility is home to the only four-walled 3D CAVE in Wyoming. We strive to expand both interest in and access to virtual reality (VR) and other emerging computer technologies, at UW and throughout the state. We provide hands-on digital workshops, short courses, and facility tours to serve the academic, educational, and business communities. We offer immersive 3D experiences, data-capture technology, and content-creation services to help analyze, interpret, and share a wide variety of data.

The Viz Center partners with UW's Advanced Research Computing Center (ARCC) to provide clients and users with access to additional computing resources, storage, and other services often needed for research.

Our work

The 3D Viz Center supports multidisciplinary research and teaching on campus and beyond:

  • GIS data visualizing the Wyoming subsurface for use in energy research

  • 360° data-capture to support stakeholder engagement with research and broader impacts

  • Educational games to engage the K-12 community on the importance of pollinators in Wyoming ecosystems

  • A 3D-CAVE application to support the teaching of molecular composition to K-12 students

  • A 3D brain model to support Psychology and Health Sciences in exploring brain anatomy

  • Collaborative design reviews for architecture students in the 3D CAVE and head-mounted virtual reality devices

  • Ground-based LiDAR scanning to create training tools for mining educators

  • A soil biome project designed to document and educate users about soil composition and formation processes throughout the country

For more about projects past and present, please check out our VizAbility page!


Our developers proudly support a wide variety of capabilities, including:

  1. 3D Scientific/Creative Dataset Creation and Support

  2. Short Courses

  3. Hardware Hire

    • 3D CAVE

    • Head-mounted VR displays

    • High-end AR headsets

    • 360° stereo cameras

    • Ground-based LiDAR

  4. 3D Digital Asset Creation

  5. Data Capture Services


Located in the EIC building on UW's Laramie campus, we have a stunning array of technology to use and share!

Mobile Technology Laboratory



  • Magic Leap augmented reality headset

  • Microsoft Hololens 2 augmented reality headsets

  • LeapMotion hand-tracking device

  • Valve Index head-mounted display

  • HTC VIVE Pro

Data capture gear

  • Leica ground-based LiDAR scanner

  • Insta360 stereo camera (3D panoramic photography)

  • Theta 360 (2D panoramic photography)


Simulation Suite

Viz Lab

The Real-Time Framework

The University of Muenster has been developing the Real-Time Framework (RTF), a novel middleware technology for the high-level development of scalable real-time online services through a variety of parallelization and distribution techniques. RTF is implemented as a cross-platform C++ library and embedded into the service-oriented architecture of the edutain@grid project (FP6) and the software-defined networking architecture of the OFERTIE project (FP7). It enables real-time services to adapt themselves during runtime to increased or decreased user demand and preserve QoS by adding resources transparently for the users. The integrated monitoring and controlling facilities offer an open interface for the runtime resource management of real-time online interactive application (ROIA) services.

Detailed Information

More detailed information about the RTF and the edutain@grid project can be found under:


The RTF will revolutionize the development of real-time, highly interactive Internet application services. In particular, the following novel features are supported:

  • High-level development of scalable real-time interactive applications.
  • Scaling interactive real-time applications like online games through a variety of parallelization and distribution techniques.
  • Monitoring and controlling of real-time applications in service-oriented architectures.
  • Seamless experience for services running on multiple resources.
  • Service adaptation during runtime to a changing user demand.
  • Preserving QoS by adding resources transparently for consumers.
  • Integrated mechanisms for trust and security (authentication, encryption, etc.).
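RTF itself is a C++ library; the toy Python sketch below (all names invented here) only illustrates the zoning idea behind that multi-server scaling: the application's world is partitioned into zones, and an entity's position determines which server simulates it.

```python
# Toy illustration of "zoning": split a 1-D world into equal zones,
# each owned by one server. RTF combines zoning with instancing and
# replication; this sketch covers only the zone-to-server mapping.
WORLD_WIDTH = 1000
SERVERS = ["server-0", "server-1", "server-2", "server-3"]
ZONE_WIDTH = WORLD_WIDTH // len(SERVERS)

def responsible_server(x: float) -> str:
    """Map a world x-coordinate to the server owning that zone."""
    zone = min(int(x) // ZONE_WIDTH, len(SERVERS) - 1)
    return SERVERS[zone]

print(responsible_server(120.0))  # → server-0
print(responsible_server(640.0))  # → server-2
```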

Added Value for Application Developer

Edutain@Grid provides the Real-Time Framework (RTF) to application developers as a C++ library for efficiently designing the network and application-distribution aspects of ROIA development. RTF's integrated services enable developers to create ROIA at a high level of abstraction that hides the distributed and dynamic nature of applications, as well as the resource management and deployment aspects of the underlying infrastructure (Grid). The high level of abstraction allows RTF to redistribute ROIA during runtime and transparently monitor their real-time metrics.

RTF provides to the application developer:

  • An automated serialization mechanism, which liberates the developer from the details of network programming.
  • A highly efficient communication protocol implementation over TCP/UDP optimized with respect to the low-latency and low-overhead requirements of ROIA. This implementation is able to transparently redirect communication endpoints to a new resource, if, e.g., parts of the ROIA are relocated to a new grid resource for load-balancing reasons.
  • A single API for using different parallelization approaches: zoning, instancing, replication and their combinations, for a scalable multi-server implementation of ROIA.
  • A fully automated distribution management, synchronization and parallelization of the ROIA update processing.
  • A transparent monitoring of common ROIA metrics that is used by the management and business layer of the edutain@grid system.
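Conceptually, the automated serialization mechanism in the first bullet turns typed application state into a compact wire format and back without the developer writing byte-level code. The sketch below uses an invented format (not RTF's actual protocol) to show what such a layer produces:

```python
import struct

# Invented wire format for an entity-state update:
# little-endian unsigned entity id followed by an x/y/z position.
WIRE = struct.Struct("<Ifff")

def encode(entity_id: int, pos: tuple) -> bytes:
    """Pack an update into its 16-byte wire representation."""
    return WIRE.pack(entity_id, *pos)

def decode(buf: bytes):
    """Unpack a wire buffer back into (id, (x, y, z))."""
    entity_id, x, y, z = WIRE.unpack(buf)
    return entity_id, (x, y, z)

msg = encode(42, (1.0, 2.5, -3.0))
print(len(msg))     # → 16
print(decode(msg))  # → (42, (1.0, 2.5, -3.0))
```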

Added Value for End-User

  • The use of RTF makes the distribution of the application over multiple servers transparent for the users, e.g., online gamers and participants of e-learning simulations.
  • Security is guaranteed by the authentication and encryption of communication connection.
  • RTF tolerates the use of non-encrypted and non-reliable communication protocols.