IBM reveals ways to use native source-code management functionality in attacks

IBM’s pen testing group X-Force Red released a new source-code management (SCM) attack simulation toolkit Tuesday, with new research revealing ways to use native SCM functionality in attacks. 

Brett Hawkins of X-Force Red will present the research at Black Hat later in the week. 

Source-code management tools like GitHub are more than just a home to intellectual property. They are a way to install code en masse on every system that code reaches. Two of the most devastating attacks in history, NotPetya and SolarWinds, came from malicious code inserted into updates and then distributed to clients. Sloppy SCM users sometimes leave API keys and passwords exposed in code, giving attackers who search for them access to other systems; from there, SCM may be connected to other DevOps servers and become a pivot point. 
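The exposed-credential risk is easy to illustrate: even a naive scan can surface committed secrets. The sketch below is hypothetical (it is not X-Force Red's or any product's detection logic) and flags strings shaped like AWS access key IDs; the key shown is AWS's documented example ID, not a real credential.

```python
import re

# AWS access key IDs start with "AKIA" followed by 16 uppercase
# alphanumeric characters. This pattern is illustrative only.
AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_exposed_keys(source: str) -> list:
    """Return substrings of `source` that look like AWS access key IDs."""
    return AWS_KEY_RE.findall(source)

committed_code = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"  # oops'
print(find_exposed_keys(committed_code))  # ['AKIAIOSFODNN7EXAMPLE']
```

Real scanners go much further (entropy checks, provider-specific token formats), but even this level of scan explains why exposed keys in repositories are so often found.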


“There's not really any research out there on attacking and defending these systems,” Hawkins told SC Media. 

At present, most attacks on SCM are by bad actors searching for interesting exposed files, repositories and content. But Hawkins developed more sophisticated attacks leading to privilege escalation, stealth and persistence to use in pen tests. 

That might mean using administrator access to create or duplicate tokens used to access the SCM. Alternatively, on GitHub, that might mean clicking a single button to impersonate users. 

Hawkins jammed his research and reconnaissance tools into SCMKit, the toolkit released Tuesday.  

“There's nothing out there that exists like SCMKit right now. It allows you to do a bunch of different attack scenarios including reconnaissance, privilege escalation, and persistence against GitHub Enterprise, GitLab Enterprise and Bitbucket,” said Hawkins. “I’m hoping to get some good feedback from the infosec community.”

Mon, 08 Aug 2022 22:00:00 -0500
API Friction Complicates Hunting for Cloud Vulnerabilities. SQL Makes it Simple

Key Takeaways

  • Developers spend too much time and effort wrangling APIs. When APIs resolve automatically to database tables, it frees devs to focus on working with the data.
  • SQL is the universal language of data, and a great environment in which to model and reason over data that's frictionlessly acquired from diverse APIs.
  • Postgres is ascendant, more than just a relational database it's become a platform for managing all kinds of data.
  • SQL has evolved! With features like common table expressions (CTEs) and JSON columns, it's more capable than you might think if you haven't touched it in a while.
  • The ability to join across diverse APIs, in SQL, is a superpower that enables you to easily combine information from different sources.

Pen testers, compliance auditors, and other DevSecOps pros spend a lot of time writing scripts to query cloud infrastructure. Boto3, the AWS SDK for Python, is a popular way to query AWS APIs and reason over the data they return.

It gets the job done, but things get complicated when you need to query across many AWS accounts and regions. And that doesn't begin to cover API access to other major clouds (Azure, GCP, Oracle Cloud), never mind services such as GitHub, Salesforce, Shodan, Slack, and Zendesk. Practitioners spend far too much time and effort acquiring data from such APIs, then normalizing it so the real work of analysis can begin.

What if you could query all the APIs, and reason over the data they return, in a common way? That's what Steampipe is for. It's an open-source Postgres-based engine that enables you to write SQL queries that indirectly call APIs within, across, and beyond the major clouds. This isn’t a data warehouse. The tables made from those API calls are transient; they reflect the live state of your infrastructure; you use SQL to ask and answer questions in real time.

The case study we’ll explore in this article shows how to use Steampipe to answer this question: Do any of our public EC2 instances have vulnerabilities detected by Shodan? The answer requires use of an AWS API to enumerate EC2 public IP addresses, and a Shodan API to check each of them.

In the conventional approach you’d find a programming-language wrapper for each API, learn the differing access patterns for each, then use that language to combine the results. With Steampipe it’s all just SQL. These two APIs, like all APIs supported by Steampipe’s suite of API plugins, resolve to tables in a Postgres database. You query within them, and join across them, using the same basic SQL constructs.

Figure 1 illustrates the cross-API join at the heart of our case study. The aws_ec2_instance table is one of the hundreds of tables that Steampipe builds by calling AWS APIs. The shodan_host table is, similarly, one of a dozen tables that Steampipe constructs from Shodan APIs. The SQL query joins the public_ip_address column of aws_ec2_instance to the ip column of shodan_host.

Before we dive into the case study, let’s look more closely at how Steampipe works. Here’s a high-level view of the architecture.

Figure 2: Steampipe architecture

To query APIs and reason over the results, a Steampipe user writes SQL queries and submits them to Postgres, using Steampipe’s own query console (Steampipe CLI) or any standard tool that connects to Postgres (psql, Metabase, etc). The key enhancements layered on top of Postgres are:

  • Postgres foreign data wrappers 
  • Per-API plugins
  • Connection aggregators

Postgres foreign data wrappers

Postgres has evolved far beyond its roots. Nowadays, thanks partly to a growing ecosystem of extensions that deeply customize the core, Postgres does more than you think. Powerful extensions include PostGIS for geospatial data, pglogical to replicate over Kafka or RabbitMQ, and Citus for distributed operation and columnar storage.

One class of Postgres extension, the foreign data wrapper (FDW), creates tables from external data. Postgres bundles postgres_fdw to enable queries that span local and remote databases. When Steampipe runs, it launches an instance of Postgres that loads another kind of FDW, steampipe-postgres-fdw, an extension that creates foreign tables from APIs with the help of a suite of plugins.

These foreign tables typically map JSON results to simple column types: date, text, number. Sometimes, when an API response includes a complex JSON structure such as an AWS policy document, the result shows up in a JSONB column.
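The mapping can be pictured as a small transform. The Python below is a simplified illustration of the idea, not Steampipe's actual FDW code, and the response fields are abbreviated rather than the exact AWS shape: scalar values become typed columns, while a nested structure stays intact as JSON destined for a JSONB column.

```python
import json

# Abbreviated, hypothetical API response for one EC2 instance.
api_response = {
    "InstanceId": "i-0518f0bd09a77d5d2",
    "State": {"Name": "stopped"},
    "IamInstanceProfile": {"Arn": "arn:aws:iam::123456789012:instance-profile/web"},
}

def to_row(resp):
    """Map one API result onto a flat table row, as an FDW might."""
    return {
        "instance_id": resp["InstanceId"],        # simple text column
        "instance_state": resp["State"]["Name"],  # simple text column
        # complex structure preserved whole, as for a JSONB column
        "iam_instance_profile": json.dumps(resp["IamInstanceProfile"]),
    }

row = to_row(api_response)
print(row["instance_state"])  # stopped
```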

Per-API plugins

The plugins are written in Go, with the help of a plugin SDK that handles backoff/retry logic, data-type transformation, caching, and credentials. The SDK enables plugin authors to focus on an essential core task: mapping API results to database tables. 

These mappings may be one-to-one. The aws_ec2_instance table, for example, closely matches the underlying REST API

In other cases it's helpful to build tables that consolidate several APIs. A complete view of an S3 bucket, for example, joins the core S3 API with sub-APIs for ACLs, policies, replication, tags, versioning, and more. Plugin authors write hydrate functions to call these sub-APIs and merge their results into tables.
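In spirit, a hydrate function is a fan-out and merge. Here is a rough Python sketch of the pattern; the function names and fields are invented for illustration, and real plugins implement this in Go via the plugin SDK.

```python
# Stand-ins for the sub-API calls a plugin might make per S3 bucket.
def get_bucket_versioning(bucket):
    return {"versioning_enabled": True}

def get_bucket_tagging(bucket):
    return {"tags": {"team": "security"}}

def get_bucket_replication(bucket):
    return {"replication_role": None}

def hydrate_bucket(bucket):
    """Merge several sub-API results into one consolidated table row."""
    row = {"name": bucket}
    for sub_api in (get_bucket_versioning, get_bucket_tagging, get_bucket_replication):
        row.update(sub_api(bucket))
    return row

print(hydrate_bucket("my-bucket"))
# {'name': 'my-bucket', 'versioning_enabled': True, 'tags': {'team': 'security'}, 'replication_role': None}
```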

A basic Steampipe query

Here’s how you’d use Steampipe to list EC2 instances.

  1. Install Steampipe
  2. Install the AWS plugin: steampipe plugin install aws
  3. Configure the AWS plugin

The configuration relies on standard authentication methods: profiles, access keys and secrets, SSO. So authenticating Steampipe as a client of the AWS API is the same as for any other kind of client. With that done, here’s a query for EC2 instances.

Example 1: Listing EC2 instances

select
  account_id,
  instance_id,
  instance_state,
  region
from
  aws_ec2_instance;

| account_id   | instance_id         | instance_state | region    |
| 899206412154 | i-0518f0bd09a77d5d2 | stopped        | us-east-2 |
| 899206412154 | i-0e97f373db22dfa3f | stopped        | us-east-1 |
| 899206412154 | i-0a9ad4df00ffe0b75 | stopped        | us-east-1 |
| 605491513981 | i-06d8571f170181287 | running        | us-west-1 |
| 605491513981 | i-082b93e29569873bd | running        | us-west-1 |
| 605491513981 | i-02a4257fe2f08496f | stopped        | us-west-1 |

The documentation for the referenced foreign table, aws_ec2_instance, provides a schema definition and example queries.

Connection aggregators

The above query finds instances across AWS accounts and regions without explicitly mentioning them, as a typical API client would need to do. That’s possible because the AWS plugin can be configured with an aggregator that combines accounts, along with wildcards for regions. In this example, two different AWS accounts – one using SSO authentication, the other using the access-key-and-secret method – combine as a unified target for queries like select * from aws_ec2_instance.

Example 2: Aggregating AWS connections

connection "aws_all" {
  plugin      = "aws"
  type        = "aggregator"
  connections = [ "aws_1", "aws_2" ]
}

connection "aws_1" {
  plugin  = "aws"
  profile = "SSO…981"
  regions = [ "*" ]
}

connection "aws_2" {
  plugin     = "aws"
  access_key = "AKI…RNM"
  secret_key = "0a…yEi"
  regions    = [ "*" ]
}

This approach, which works for all Steampipe plugins, abstracts connection details and simplifies queries that span multiple connections. As we’ll see, it also creates opportunities for concurrent API access.

Case Study A: Use Shodan to find AWS vulnerabilities

Suppose you run public AWS endpoints and you want to use Shodan to check those endpoints for vulnerabilities. The logic is simple: enumerate the public IP addresses of your EC2 instances across all accounts and regions, then ask Shodan what it knows about each one.

A conventional solution in Python, or another language, requires you to learn and use two different APIs. There are libraries that wrap the raw APIs, but each has its own way of calling APIs and packaging results. 

Here’s how you might solve the problem with boto3.

Example 3: Find AWS vulnerabilities via Shodan, using boto3

import boto3
import datetime
from shodan import Shodan

aws_1 = boto3.Session(profile_name='SSO…981')
aws_2 = boto3.Session(aws_access_key_id='AKI…RNM', aws_secret_access_key='0a2…yEi')
aws_all = [ aws_1, aws_2 ]
regions = [ 'us-east-2','us-west-1','us-east-1' ]

shodan = Shodan('h38…Cyv')

instances = {}

for aws_connection in aws_all:
  for region in regions:
    ec2 = aws_connection.resource('ec2', region_name=region)
    for i in ec2.instances.all():
      if i.public_ip_address is not None:
        instances[i.instance_id] = i.public_ip_address

for k in instances.keys():
  try:
    data =[k])
    print(k, data['ports'], data['vulns'])
  except Exception as e:
    print(e)

When APIs are abstracted as SQL tables, though, you can ignore those details and distill the solution to its logical essence. Here's how you use Steampipe to ask and answer the question: "Does Shodan find vulnerable public endpoints in any of my EC2 instances?"

Example 4: Find AWS vulnerabilities using Steampipe

select
  a.instance_id,
  s.ports,
  s.vulns
from
  aws_ec2_instance a
left join
  shodan_host s on a.public_ip_address = s.ip
where
  a.public_ip_address is not null;

| instance_id         | ports    | vulns              |
| i-06d8571f170181287 |          |                    |
| i-0e97f373db42dfa3f | [22,111] | ["CVE-2018-15919"] |

There's no reference to either flavor of API; you just write SQL against Postgres tables that transiently store the results of implicit API calls. This isn’t just simpler, it’s also faster. The boto3 version takes 3-4 seconds to run for all regions of the two AWS accounts I’ve configured as per example 2. The Steampipe version takes about a second. When you’re working with dozens or hundreds of AWS accounts, that difference adds up quickly. What explains it? Steampipe is a highly concurrent API client.

Concurrency and caching

If you've defined an AWS connection that aggregates multiple accounts (per example 2), Steampipe queries all of them concurrently. And within each account it queries all specified regions concurrently. So while my initial use of the query in example 4 takes about a second, subsequent queries within the cache TTL (default: 5 minutes) only take milliseconds. 

It’s often possible, as in this case, to repeat the query with more or different columns and still satisfy the query in milliseconds from cache. That’s because the aws_ec2_instance table is made from the results of a single AWS API call.
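That caching behavior can be approximated with a minimal TTL cache. This is an illustrative sketch, not Steampipe's implementation: the first lookup pays the API cost, and repeats within the TTL are served from memory.

```python
import time

class TTLCache:
    """Minimal time-to-live cache; entries expire after `ttl` seconds."""
    def __init__(self, ttl=300.0):  # 5 minutes, matching Steampipe's default TTL
        self.ttl = ttl
        self.store = {}

    def get(self, key, fetch):
        """Return the cached value for `key`, calling `fetch()` on a miss."""
        entry = self.store.get(key)
        if entry and time.monotonic() - entry[0] < self.ttl:
            return entry[1]
        value = fetch()
        self.store[key] = (time.monotonic(), value)
        return value

api_calls = 0
def fake_ec2_api():
    global api_calls
    api_calls += 1
    return ["i-0518f0bd09a77d5d2"]

cache = TTLCache()
cache.get("aws_ec2_instance", fake_ec2_api)  # miss: one API call
cache.get("aws_ec2_instance", fake_ec2_api)  # hit: served from cache
print(api_calls)  # 1
```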

In other cases, like the aws_s3_bucket table, Steampipe synthesizes many S3 sub-API calls including GetBucketVersioning, GetBucketTagging, and GetBucketReplication. And it makes those calls concurrently too. Like any other API client, Steampipe is subject to rate limits. But it’s aggressively concurrent so you can quickly assess large swaths of cloud infrastructure. 

Note that when using a table like aws_s3_bucket, it’s helpful to request only the columns you need. If you really want everything, you can select * from aws_s3_bucket. But if you only care about account_id, instance_id, instance_state, and region, then asking explicitly for those columns (as per example 1) avoids unnecessary sub-API calls.

Case Study B: Find GCP vulnerabilities

If your endpoints only live in AWS, example 4 solves the problem neatly. Now let's add GCP to the mix. A conventional solution requires that you install another API client, such as the Google Cloud Python Client, and learn how to use it. 

With Steampipe you just install another plugin: steampipe plugin install gcp. It works just like the AWS plugin: it calls APIs and puts results into foreign tables that abstract API details so you can focus on the logic of your solution. 

In this case that logic differs slightly. In AWS, public_ip_address is a core column of the aws_ec2_instance table. In GCP you need to combine results from one API that queries compute instances, and another that queries network addresses. Steampipe abstracts these as two tables: gcp_compute_instance and gcp_compute_address. The solution joins them, then joins that result to Shodan as in example 4. 

Example 5: Find GCP vulnerabilities using Steampipe

with gcp_info as (
  select
    a.address,
  from
    gcp_compute_address a
  join
    gcp_compute_instance i
  on
    a.users->>0 = i.self_link
  where
    a.address_type = 'EXTERNAL'
select as instance_id,
  s.ports,
  s.vulns
from
  gcp_info g
left join
  shodan_host s on g.address = s.ip;

This query makes use of two language features that can surprise people who haven't looked at SQL in a long while. The WITH clause is a Common Table Expression (CTE) that creates a transient table-like object. Queries written as a pipeline of CTEs are easier to read and debug than monolithic queries. 

a.users is a JSONB column. The ->> operator extracts its zeroth element as text. Now that JSON is a first-class citizen of the database, relational and object styles mix comfortably. That's especially helpful when mapping JSON-returning APIs to database tables. Plugin authors can move some pieces of API data into regular columns and others into JSONB columns. How to decide what goes where? That requires an artful balance of concerns, but the key point is that modern SQL enables flexible data modeling. 
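For readers more at home in Python than SQL, a.users->>0 behaves like parsing the JSON and taking the zeroth element as text. A hedged analogy follows; the URL is a made-up value, not real GCP output.

```python
import json

# A "users" value shaped like GCP's: a JSON array of resource URLs.
users_jsonb = '[""]'

# SQL equivalent:  a.users->>0   (zeroth array element, returned as text)
zeroth_as_text = json.loads(users_jsonb)[0]
print(zeroth_as_text)
```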

Case Study C: Find vulnerabilities across clouds

If you've got public endpoints in both AWS and GCP, you'll want to combine the queries we've seen so far. And now you know everything you need to know to do that.

Example 6: Find AWS and GCP vulnerabilities

with aws_vulns as (
  -- insert example 4
),
gcp_vulns as (
  -- insert example 5
)

select 'aws' as cloud, * from aws_vulns
union
select 'gcp' as cloud, * from gcp_vulns;

| cloud | instance_id         | ports    | vulns              |
| aws   | i-06d8571f170181287 |          |                    |
| aws   | i-0e97f373db42dfa3f | [22,111] | ["CVE-2018-15919"] |
| gcp   | 8787684467241372276 |          |                    |

We've arranged example 4 and example 5 as a CTE pipeline. To combine them requires nothing more than a good old-fashioned SQL UNION. 

You also now know everything you need to know to expand the pipeline with CTEs for the Oracle or IBM clouds. While you're at it, you might want to bring more than just Shodan's knowledge to bear on your public IP addresses. There are plugins that do reverse DNS lookup, map IP addresses to geographic locations, and check addresses for reported malicious activity. Each of these maps another API that you don't need to learn how to use, models it as a collection of database tables, and enables you to work with it using the same basic SQL constructs you've seen here.

It's just Postgres

We've said that Steampipe isn’t a data warehouse, and that API-sourced tables remain cached for only a short while. The system is optimized for rapid assessment of cloud infrastructure in real time. But Steampipe is just Postgres, and you can use it in all the same ways. So if you need to persist that realtime data, you can.

Example 7: Persist a query as a table

create table aws_and_gcp_vulns as 
  -- insert example 6 

Example 8: Persist a query as a materialized view

create materialized view aws_and_gcp_vulns as
  -- insert example 6
  -- then, periodically: refresh materialized view aws_and_gcp_vulns

Example 9: Pull query results into Python

import psycopg2, psycopg2.extras
conn = psycopg2.connect('dbname=steampipe user=steampipe host=localhost port=9193')
cursor = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
cursor.execute('select * from aws_and_gcp_vulns')
for row in cursor.fetchall():
  print(row['cloud'], row['instance_id'], row['vulns'])

Example 10: Connect with psql

psql -h localhost -p 9193 -d steampipe -U steampipe

You can use the same connection details to connect from Metabase, or Tableau, or any other Postgres-compatible tool. 

Bottom line: Steampipe’s API wrangling augments the entire Postgres ecosystem. 

Skip the API grunt work, just do your job

For a DevSecOps practitioner the job might be to inventory cloud resources, check for security vulnerabilities, or audit for compliance. It all requires data from cloud APIs, and acquiring that data in a tractable form typically costs far too much time and effort. With fast and frictionless access to APIs, and a common environment in which to reason over the data they return, you can focus on the real work of doing inventory, security checks, and audits. The requisite API wrangling is a distraction you and your organization can ill afford. Don’t let it get in the way of doing your real jobs, which are plenty hard enough even when you have the data you need.

Wed, 06 Jul 2022 06:28:00 -0500
Is the Qualified Professional Asset Manager Exemption in Jeopardy?

In late July, the Department of Labor’s Employee Benefits Security Administration announced a proposed amendment to the Class Prohibited Transaction Exemption 84-14.

The PTE is commonly known as the “qualified professional asset manager exemption,” and its basic purpose is to permit various parties who are related to retirement plans covered by the Employee Retirement Income Security Act’s fiduciary provisions to engage in otherwise-barred transactions involving retirement plans and individual retirement account assets. To satisfy the QPAM exemption, the assets in question must be managed by QPAMs that are “independent of the parties in interest” to the plan and that meet specified financial standards, among other conditions.

The proposed amendment includes a number of important changes. As summarized in an EBSA press release, the amendment would better protect plans and their participants and beneficiaries by, among other changes, addressing what EBSA calls “perceived ambiguity” as to whether foreign criminal convictions are included in the scope of the exemption’s ineligibility provision. The amendment further expands the ineligibility provision to include additional types of serious misconduct. Other provisions focus on mitigating potential costs and disruptions to plans and IRAs when a QPAM becomes ineligible due to a conviction or other serious misconduct.

Other changes the amendment would make include an update of the asset management and equity thresholds in the exemption’s definition of “qualified professional asset manager” and the addition of a standard recordkeeping requirement that the exemption currently lacks. Finally, the amendment seeks to clarify the requisite independence and control that a QPAM must have with respect to investment decisions and transactions.

A Surprise Proposal

Speaking with PLANADVISER about the implications of the amendment, Carol McClarnon, a partner on the tax group of Eversheds Sutherland, calls it “unexpected and worrying.”

“In its press release announcing the proposal, the DOL named six objectives of the proposed changes,” McClarnon says. “On their face, these objectives would appear to be sensible clarifications. However, the actual conditions being proposed to attain these objectives reveal that the proposal would add significant costs and liability exposure to managers, perhaps even limiting the QPAM exemption as a viable solution.”

As McClarnon observes, before the proposal’s publication last week, the common perception among regulatory experts in the retirement plan industry was that the DOL was unlikely to direct EBSA to take an action like this, as the DOL’s key sub-agency is still operating without a Senate-confirmed head.

“The perception has been that EBSA has put a lot on hold,” McClarnon says. “We did know that the QPAM issue was on EBSA’s radar, but I think it is fair to say that few people expected a proposal as ambitious as this to come out right now. Frankly, I was pretty amazed to see this proposal come out.”

A Fundamental Exemption

McClarnon says the QPAM exemption is of fundamental importance to the operation of the modern retirement plan system. This is because so many plans invest in pooled funds and group-style investments with other plans and third parties, and because of the way ERISA defines and treats “parties in interest” to retirement plans or other institutional investors subject to ERISA’s fiduciary provisions. Any party in interest to a given retirement plan may be prohibited from entering into certain transactions with that plan if the transaction will result in additional compensation going to the party in interest, McClarnon notes.

Parties in interest may include, among others, fiduciaries or employees of the plan, any person who provides services to the plan, an employer whose employees are covered by the plan, an employee organization whose members are covered by the plan, a person who owns 50% or more of such an employer or employee organization, or relatives of such persons.

McClarnon explains that certain plan transactions with parties in interest are prohibited under ERISA and are required, without regard to their materiality, to be disclosed in the plan’s annual report to the DOL.

“In practice, the QPAM exemption is used very commonly and for a variety of different purposes,” McClarnon says. “Just imagine if you had, say, 1,000 ERISA-covered retirement plans invested in a given fund. Each of those plans will have a ton of parties in interest. In the most basic terms, what the QPAM exemption does is state that, if you as the asset manager meet the regulatory qualifications, most transactions with these parties in interest to the plan are OK. The new proposal addresses the qualifications in a meaningful way.”

A Burdensome Proposal

Based on her initial reading and discussions with colleagues, McClarnon says the proposal appears to be “so burdensome that it could almost be said to essentially change the availability of the QPAM exemption.” She points out that, in the nearly four decades since the QPAM exemption framework was first established, the financial services world has become far more interconnected.

“In today’s industry, you just have a lot more complexity, with larger conglomerates and highly sophisticated international entities that do business with U.S. retirement plans,” McClarnon says. “The proposed framework, if it is not adjusted after the comment period, will make it very difficult for these types of entities to reliably and efficiently use the QPAM exemption, in my opinion.”

As an example, McClarnon points to the provision in the proposal that declares that the entrance of a QPAM-affiliated person into a non-prosecution agreement will trigger the ineligibility restrictions.

“To me, that’s concerning, because you do not generally speaking admit criminal wrongdoing when entering into such an agreement,” McClarnon says. “There is also a new concept and condition that they have called ‘integrity.’ The proposal says that the DOL can disqualify an entity from using the QPAM exemption based on their own internal examination process and the determination that a QPAM exemption user has not acted with integrity, which they define in the proposal by using various examples and stipulations. In my opinion, there is very little due process recourse for an entity that finds itself in this situation.”

McClarnon says she is perhaps most troubled by the provision in the proposal that seeks to isolate retirement plans from harm if a service provider they work with has its QPAM exemption revoked by EBSA. One must consider the potential consequences of such a framework, she says.

“They don’t want to cause hardships on the plans in cases of disqualification, which makes sense,” McClarnon says. “However, the proposal seems to require that the investment manager sign a written agreement where they declare that, if there is a criminal finding or the ineligibility provision is triggered for some other reason, the QPAM itself has to pick up the full cost of helping the plan make a transition to a new investment. Just imagine the potential cost of this if we are talking about a mega-sized retirement plan or group of plans.”

What Comes Next

McClarnon emphasizes that she understands the DOL and EBSA have a critical job to do in protecting retirement plans and their participants. However, she expects the investment community to respond forcefully to this proposal, and that the comment period could help EBSA constructively refine the proposal.

“The potential for unintended consequences here is so significant,” McClarnon says. “The exemption serves a critical purpose in the current retirement plan system. It is meant to allow plans to be able to invest in things that would otherwise have technical prohibited transaction restrictions on them that do not actually relate to potential operational conflicts of interest. If the proposal is not refined, you could see investment providers not wanting to take on this type of exposure.”

The one element of the proposal she would most want to see changed, McClarnon says, is the contractual requirement related to the QPAM covering the full expense of a fund transition on behalf of a plan.

“I really don’t like the forced contractual requirement,” McClarnon says. “I don’t think EBSA should be telling people these contracts have to be set. I also want to point out that, yes, a giant financial services company may be able to figure out how to make this new framework work, and they may even have the scale and resources to meet the hold-harmless provisions. But a lot of small advisers use this exemption all the time, and in fact it is baked into many standard operating agreements used by all different parties in the industry. It is very common in all kinds of collective investment trust agreements, for example, and we know these investments are becoming more popular. There are just so many traps for the unwitting and the unwary in all this.”

Thu, 04 Aug 2022 20:12:00 -0500
DevOps in AWS LiveLessons Review and Q&A with Author Paul Duvall

DevOps in AWS LiveLessons, published by Addison-Wesley Professional, is a video course aimed at infrastructure developers and Sys Ops engineers who have a goal of creating a fully-automated continuous delivery system in Amazon Web Services (AWS). The 4+ hour course is especially focused on leveraging key features of AWS, such as programmable infrastructure, elasticity, and ephemeral resources, while also presenting them in the framework of DevOps best practices and tools. InfoQ has spoken with course author Paul Duvall.

The course sets off by considering the importance of learning the motivations of stakeholders and putting in place the proper communication tools to get access to their assets. Once this is accomplished, according to Duvall, the next step is assessing the current state of the software delivery system. This entails documenting all steps of the current software delivery process as a prerequisite to its eventual automation.

The rest of the course is devoted to a very thorough demonstration of all the steps required to set up a CI infrastructure in AWS, from the creation of network resources using AWS Virtual Private Cloud and the definition of subnets, route tables, security groups, and so on, to the setup of a proper CI pipeline across all of its stages. For each stage in the CI pipeline (commit, acceptance, capacity, exploratory, pre-production, and production), Duvall explains what its purpose is and what role it plays.

The final lessons address how to automate the process itself of setting up a new environment and deployment system; processes that are not specific to any given stage in the pipeline, such as monitoring and logging; and ongoing activities that can be helpful for the whole team.

InfoQ has interviewed Paul Duvall.

InfoQ: Hi Paul. Could you please shortly introduce yourself and describe your experience with Continuous Delivery?

I’m the co-founder of Stelligent and the primary author of the book on Continuous Integration. Our business objective at Stelligent is to implement Continuous Delivery solutions in Amazon Web Services for our customers so that they can release software whenever they choose to do so. I’ve been practicing Continuous Integration since the early 2000s on software development projects and began practicing what is now referred to as Continuous Delivery a few years after that. Like most things, it’s been a bit of an evolution toward more self-service, more automation and a better user experience for those who are developing and delivering software systems.

I’ve got a particular passion for automating repetitive and error-prone processes so that we humans can focus on higher-level activities. I find that the more I ask myself the question, “how can I systematize this process and make it self-service?”, the more I test the limits of what’s possible with automation. I believe we’re still at “Day 1” when it comes to automation around the software delivery process.

InfoQ: Can you briefly define Continuous Delivery and what benefits it can bring to an organization?

CD provides the benefits of having always-releasable software so decisions to release software are driven by business needs rather than operational constraints. Moreover, CD reduces the time between when a defect is introduced and when it’s discovered and fixed. CD embraces the DevOps mindset which, at its core, is about increasing collaboration and communication across an organization and reducing team silos that stifle this collaboration.

InfoQ: When should an organization adopt a continuous delivery model?

Organizations should implement a continuous delivery model when they have a need to regularly release software and/or they want to reduce the heroic efforts required when they do release software. Releasing software less often tends to make the release process more complex and prone to error, which calls for heroic efforts that can often otherwise be avoided. In other words, even if there isn’t a business need to release once a month, there’s often a compelling cost and quality-of-life motivation to move toward Continuous Delivery regardless of how often releases occur.

Organizations that haven’t incorporated Continuous Delivery and practices into their teams usually experience at least some of the following symptoms:

  • Team Silos - Teams segmented by functional area
  • Manual Instructions - Using manual instructions as the canonical source for infrastructure and other configuration
  • Tribal Knowledge - Knowledge is shared from one person to others and not formally institutionalized
  • Email - Managing release activities through emails
  • Different Tools Across Teams - Different teams across the lifecycle use different tools to deliver software
  • Issues/Tickets - Using issues/tickets as a means of communicating and assigning build, test, deployment and release-related tasks
  • Meetings - Meetings are used as a weak attempt to get different teams on the “same page”

These symptoms often generate the following results:

  • Errors - When a full system isn’t integrated frequently, environments throughout the lifecycle are different, often leading to errors
  • Increased costs - Errors lead to increased costs
  • Delays - Weeks to get access to even just development and testing environments; Increased wait times as teams attempt to communicate across team silos
  • Less frequent releases - Releases happen during off-hours or at weekends, calling for heroic efforts

InfoQ: How would you describe the kind of mentality change that should go along with adopting a CD system?

On an individual level, people need to be moving away from the “it’s not my job” mentality and move toward a “systems thinking” mentality. People should be continually asking themselves “how will this change affect the rest of the system?” This kind of systems thinking should manifest into more holistic thinking for the benefit of the overall system.

When I refer to the whole team, I mean a cross-functional team that consists of people who are 100% dedicated to the project. This team’s external team dependencies should be minimal and they should have the ability to release the software whenever there’s a business decision to do so without going through a separate team.

When I refer to the whole process, I’m referring to a heuristic that we’ve found to work well when thinking about “systematizing” a process. This heuristic is: document, test, code, version and continuous. This translates to:

  1. Document - Document the process with the idea that you will automate away most of the steps you’re documenting. We refer to this as “document to automate”.
  2. Test - Write automated tests that will verify that the behavior you’re automating is working.
  3. Code - Codify the behavior based on the tests and/or documentation.
  4. Version - Commit the code and tests to the version-control repository.
  5. Continuous - Once the code and tests are versioned, ensure it can be run headless (e.g. a single command taking into account any necessary dependencies) and then configured to run as part of a single path to production through a continuous integration system.

When people think in terms of the whole system, they need to extend beyond just the application/service code, and include the configuration, infrastructure, data and all the supporting and dependent resources such as build, tests, deployments, binaries, etc. All of these components need to be documented, tested, codified, versioned and run continuously with each and every commit.
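The document/test/code/version/continuous heuristic can be sketched in code. The example below is a minimal, hypothetical illustration (the `create_app_config` step and its values are invented, not taken from the interview): steps 2 and 3 become a function plus an automated test, and step 5 becomes a headless entry point that a continuous integration system can run on every commit.

```python
"""A minimal sketch of the document -> test -> code -> version -> continuous
heuristic. The 'create_app_config' step is a hypothetical example of a
documented manual release step being codified."""

def create_app_config(env: str, replicas: int) -> dict:
    """Codify a documented release step: build the deployment configuration
    for a given environment instead of following manual instructions."""
    if env not in ("dev", "test", "prod"):
        raise ValueError(f"unknown environment: {env}")
    return {
        "environment": env,
        "replicas": replicas,
        # Production gets stricter defaults than other environments.
        "debug": env != "prod",
    }

def test_create_app_config() -> None:
    """Automated check that the codified behavior matches the documentation."""
    config = create_app_config("prod", replicas=3)
    assert config["replicas"] == 3
    assert config["debug"] is False

if __name__ == "__main__":
    # Headless entry point: a single command a CI system can run on each commit.
    test_create_app_config()
    print("all checks passed")
```

Once this file is committed to version control, wiring the test into a CI pipeline completes the "continuous" step.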

InfoQ: In your LiveLessons, you stress the importance of a number of initial steps, such as identifying all stakeholders, performing a discovery process, setting up communication management tools, moving to a cross-functional organization and so on. Could you elaborate on the importance of those steps?

As an engineer, it’d be easy for me to gloss over these types of activities for the more interesting hands-on coding exercises. While other lessons in the video do focus on the hands-on coding, I was seeking to show viewers all of the steps that we typically go through at Stelligent when implementing CD with a customer. For example, we find that when teams don’t take the time to determine the current state of their processes, they perform unnecessary sub-optimization on things that don’t provide the most benefit.

For instance, if your process from code commit to production takes three months and there are multiple days spent just waiting for people downstream to click or approve something, why would you first spend days or weeks optimizing the process time of one of the steps in your delivery process? Instead, you might spend that time determining why there are multi-day bottlenecks.

This is why it’s essential to do things like value-stream mapping so that everyone gets on the same page about the current state (either on the software project under development or other projects in the organization if it’s a new development effort) so that you’re spending time on optimizing the most critical bottlenecks first.

InfoQ: Among all the steps you go through to set up a CD solution within an organization, what are the critical ones in order to get CD right?

  • Value-stream mapping - Create a left-to-right illustration of the current state containing all the steps in your process, including the overall lead time, process times and wait times. Illustrate the anticipated future state.
  • Self-service - Create fully automated, pull-based systems so that any authorized team member can get environments, tools or other resources without human intervention.
  • Cross-functional teams - For example: application developers, testing/QA, DBAs, operations and business analysts as part of the same small team.
  • Feedback - Get the right information to the right people at the right time. This may include real-time information radiators, emails and other communication mechanisms. Instill “stopping the line” into the team culture so that errors are fixed soon after they’re introduced.

InfoQ: You focus on AWS in your LiveLessons. What is the advantage of using it to implement an organization’s CD system?

At Stelligent, we’ve focused exclusively on building CI/CD solutions and have been working in the cloud since 2009. We worked with multiple cloud providers in our first year or so. For one customer, we used and analyzed something like 15 cloud providers at the time. We determined that AWS was the most feature-rich, stable and cost-effective solution for their needs, and we decided that it was the best solution for our needs as well.

InfoQ: How would AWS stack up in a hypothetical Continuous Delivery Contest against its main competitors, such as Microsoft Azure, IBM SmartCloud, Google Cloud Platform, and so on?

For what it’s worth, I’d include the Google Cloud Platform before I’d include the IBM SmartCloud. AWS is far ahead of any of the other infrastructure-as-a-service providers on the market. For example, there’s no real equivalent to AWS OpsWorks, particularly in the context of an integrated IaaS provider. Because AWS has a large suite of services and exposes them through AWS CloudFormation, the number of resources you can automatically provision is far greater than with any other provider.

Moreover, there are no genuinely comparable solutions to the new Continuous Delivery-focused services at AWS such as AWS CodeDeploy, AWS CodePipeline and AWS CodeCommit and several other enterprise-focused services.

Paul M. Duvall is the Chairman and CTO at Stelligent. Stelligent is an expert in implementing Continuous Delivery solutions in Amazon Web Services (AWS) and has been working with AWS since 2009. Paul is the principal author of Continuous Integration: Improving Software Quality and Reducing Risk (Addison-Wesley, 2007) and a 2008 Jolt Award Winner. Paul is an author of many other books and publications, including DevOps in AWS (Addison-Wesley, 2014) and two IBM developerWorks series on topics around automation, DevOps and Cloud Computing. He is passionate about software delivery and the cloud and actively blogs here.

DOL Proposes Qualified Professional Asset Manager Exemption Changes

On Wednesday, the U.S. Department of Labor’s Employee Benefits Security Administration announced a proposed amendment to the Class Prohibited Transaction Exemption 84-14.

As the DOL’s summary announcement about the proposal notes, the exemption at issue is commonly referred to as the “qualified professional asset manager exemption.” The amendment’s stated purpose is to ensure the exemption continues to protect plans, participants, beneficiaries, individual retirement account owners and their interests.

“The QPAM Exemption permits various parties who are related to plans to engage in transactions involving plan and individual retirement account assets if, among other conditions, the assets are managed by QPAMs that are independent of the parties in interest and that meet specified financial standards,” the announcement states. “Since the exemption’s 1984 creation, substantial changes have occurred in the financial services industry. These changes include industry consolidation and the increasing global reach of financial services institutions in their affiliations and investment strategies, including those for plan assets.”

The amendment would protect plans and their participants and beneficiaries by, among other changes, addressing “perceived ambiguity” as to whether foreign convictions are included in the scope of the exemption’s ineligibility provision. The amendment further expands the ineligibility provision to include additional types of serious misconduct, and it focuses on mitigating potential costs and disruption to plans and IRAs when a QPAM becomes ineligible due to a conviction or participates in other serious misconduct.

Other changes the amendment would make include an update of the asset management and equity thresholds in the definition of “qualified professional asset manager” and the addition of a standard recordkeeping requirement that the exemption currently lacks. Finally, the amendment seeks to clarify the requisite independence and control that a QPAM must have with respect to investment decisions and transactions.

In the DOL’s announcement, Ali Khawar, acting assistant secretary for employee benefits security at EBSA, called the proposed amendment “overdue.”

“The proposed amendment provides important protections for plans and individual retirement account owners by expanding the types of serious misconduct that disqualify plan asset managers from using the exemption, and by eliminating any doubt that foreign criminal convictions are disqualifying,” Khawar said. “The exemption also provides a one-year period for a disqualified financial institution to conduct an orderly wind-down of its activities as a QPAM, so plans and IRA owners can terminate their relationship with an ineligible asset manager without undue disruption.”

The full text of the rule amendment proposal is here.

Information Management

MSc PG Certificate PG Diploma

2022 start September 

Information School, Faculty of Social Sciences

Prepare for your future career with the world’s number one school for Library and Information Management (QS Rankings 2021). Learn the core concepts and principles related to the systematic design and implementation of information, knowledge and data environments in organisational and networked contexts. The MSc and PG Diploma awards are CILIP accredited.

Course description

Ready yourself for a wide variety of organisational and consultancy roles that demand expertise in information and knowledge management. The emphasis of the programme is on developing your knowledge, skills and experience of the design, implementation, management and governance of effective information environments. This includes examining their purposes, functions and processes and mediating between information users, resources and systems in both organisational and networked contexts.

You'll also acquire practical experience in the use of new information and communications technologies and develop personal awareness and skills relevant to information management in a variety of workplace roles.

You'll learn basic foundations of information management concerning the systematic acquisition, storage, retrieval, processing and use of data, information and knowledge, in support of decision-making, sense-making and organisational goals.

If you have two or more years' relevant work experience in the information sector and wish to study for a higher degree, you may be interested in our Professional Enhancement programme. The programme is designed for people already in work who want to further their careers and allows greater freedom in module choice in recognition of your existing expertise.


The MSc and PG Diploma programmes are accredited by the Chartered Institute of Library and Information Professionals (CILIP).


A selection of modules are available each year - some examples are below. There may be changes before you start your course. From May of the year of entry, formal programme regulations will be available in our Programme Regulations Finder.

You’ll need 180 credits to get a masters degree, with 60 credits from core modules, 60 credits from optional modules and a dissertation (including dissertation preparation) worth 60 credits.

Core modules:

Information and Knowledge Management

This module addresses both theoretical and practical aspects of managing information and knowledge in organisations, enabling you to engage critically with a number of current issues and debates in this field. It is designed around case studies of well-known organisations and involves the development of skills in the analysis and formulation of strategies for organisational development. Assessed work also focuses on skills in reviewing the domain and on the development of conceptual models for information and knowledge management.

15 credits
Information Retrieval: Search Engines and Digital Libraries

Information Retrieval (IR) systems are ubiquitous as searching has become a part of everyday life. For example, we use IR systems when we search the web, look for resources using a library catalogue or search for relevant information within organisational repositories (e.g. intranets). This module provides an introduction to the area of information retrieval and computerised techniques for organising, storing and searching (mainly) textual information items.

Techniques used in IR systems are related to, but distinct from, those used in databases. The emphasis for IR systems is to find documents that contain relevant information and separate these from a potentially vast set of non-relevant documents. The content of the module falls into two main areas: (1)  fundamental concepts of IR (indexing, retrieval, ranking, user interaction and evaluation) and (2) applying IR in specific contexts, bias in information retrieval, and dealing with non-textual and non-English content (multimedia and multilingual IR).

15 credits
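As a toy illustration of the indexing and ranking ideas this module covers, the sketch below builds an inverted index over three invented documents and ranks results by query-term overlap. Real IR systems use far richer scoring (e.g. TF-IDF or BM25); this is only the core mechanism.

```python
"""A toy inverted index and ranked retrieval over invented documents."""

from collections import defaultdict

docs = {
    1: "the library catalogue lists every book",
    2: "search the web with a search engine",
    3: "an intranet search finds organisational documents",
}

# Indexing: map each term to the set of documents containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query: str) -> list[int]:
    """Retrieval and ranking: score each document by how many query terms
    it contains, and return matching document ids, best match first."""
    scores = defaultdict(int)
    for term in query.split():
        for doc_id in index.get(term, set()):
            scores[doc_id] += 1
    return sorted(scores, key=lambda d: (-scores[d], d))

print(search("search documents"))  # documents 2 and 3 match; 3 ranks first
```

Separating the indexing step from the query step is what lets IR systems sift a handful of relevant documents from a potentially vast non-relevant collection.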
Information Systems in Organisations

This module integrates topics of organisation, management and information systems, with the aim of offering students an integrated set of concepts and tools for understanding information systems in organisations. During this module students will explore basic management and organisational theories and examine the impact of information systems on organisations. It introduces key concepts which will be explored further in other modules on the Information Management and Information Systems programmes.

15 credits
Information Governance and Ethics

This module explores a) the emergence of information and data as an economic resource; b) the governance challenges and ethical issues arising from organizations' systematic capture, processing, and use of information and data for organizational goals, e.g. value, risk, accountability, ownership, privacy etc; c) governance, ethical, legal and other frameworks relevant to the capture, processing and use of information and data within organizational and networked contexts; and d) technologies and techniques used in the governing and governance of information and data. Case examples from a number of domains, e.g. business, government, health, law, and social media illustrate the topics investigated.

15 credits
Research Methods and Dissertation Preparation

This module assists students in the identification of, and preparation of a dissertation proposal. Students will: learn about: on-going research in the School; identify and prepare a dissertation proposal; carry out a preliminary literature search in the area of the dissertation research topic; and be introduced to the use of social research methods and statistics for information management.

15 credits

This module enables students to carry out an extended piece of work on an Information School approved topic, so that they can explore an area of specialist interest to them in greater depth. Students will be supported through tutorials with a project supervisor, will apply research methods appropriate to their topic, and implement their work-plan to produce an individual project report. Students will already have identified a suitable topic and designed a project plan in the pre-requisite unit Research Methods and Dissertation Preparation.

45 credits

Optional modules - one from:

Introduction to Programming

This module introduces students with little or no programming experience to the general-purpose programming language Python. Python is popular and easy to learn for developing a wide range of information systems applications. The skills and understanding required to program in Python are valued by organisations and transfer to most other programming languages.

15 credits
Website Design and Search Engine Optimisation

This module aims to teach the key principles of search engine optimised (SEO) and user-centred website design; including areas of search optimised and accessible design, content strategy, requirements analysis, user experience, and Web standards compliance. Students will have opportunities to apply this knowledge to authentic design problems and develop web authoring skills valued by employers. In particular, students will be introduced to the latest web mark-up languages (currently HTML5 and CSS3) and issues surrounding long-term search ranking, globalisation, internationalisation and localisation - with a business focussed context.

15 credits
Information Systems Modelling

This module considers the role of information modelling within the organisation and provides an appreciation of the rigorous methods that are needed to analyse, design, develop and maintain computer-based information systems. It is intended to provide an introduction to information modelling techniques. Students gain experience in applying a wide range of systems analysis methods, covering topics including: soft systems analysis; structured systems analysis methodologies; business process modelling; data flow modelling and object-oriented approaches (e.g. RUP/UML).

15 credits

Optional modules - three from:

Information Visualisation for Decision-Making

Organisations are increasingly challenged by the volume, variety and speed of data collected from systems in internal and external environments. This module will focus on: i) theoretical and methodological frameworks for developing visualisations; ii) how visualisations can be used to explore and analyse different types of data; iii) how visualisations can turn data into information that can be used to offer critical insights and to aid in decision-making by managers and others. Module content includes how to design visualisations, how to create and critique different visualisations, and good practices in information visualisation and dashboard design.

15 credits
Information Systems Project Management

This module aims to provide a broad understanding of the fundamentals of project management as they apply to the development of Information Systems (IS). The module uses a flexible approach combining face-to-face seminars with web-based learning material. The module will begin with an overview of the principles involved in IS project management, followed by a discussion of IS development methodologies and their different characteristics and specialisms. The rest of the module will discuss the requirements for various project control activities, including estimating development resources, risk management, guidelines for system quality assurance, and various project control techniques that have been developed in recent years. The module will culminate with a review of human resource management issues.

15 credits
Digital Business

The module addresses both theoretical and practical aspects of digital business. The module will cover the latest business trends and business models adopted by ecommerce companies so that students are able to recognise and relate to the current practice in business.  The module aims to equip the students with theoretical and business knowledge and entrepreneurial skills to understand and manage new ways of doing business in the digital economy.

15 credits
Researching Social Media

The module will examine the key theoretical frameworks and methods used in social media studies. Students will explore the following questions: 1) What can be learnt about society by studying social media? 2) How should researchers construct ethical stances for researching sites such as Facebook and Twitter? 3) What are the traditional and digital research methods and tools that can be applied to conduct research on social media? 4) What are the strengths and weaknesses of these methods?

15 credits
ICTs, Innovation and Change

This module aims at examining and exploring how organizations and human activity systems cope with change due to the new implementation or updating of Information Systems and Information and Communication Technologies (ICTs). This change occurs in complex social environments and has cultural, political, structural and ethical impacts that need to be carefully managed. The module will examine and explore how both managers and Information Systems practitioners can be better prepared for the unpredictability, unintended outcomes and possible harmful consequences of change caused by the introduction or update of Information Systems and ICTs. Therefore, the module aims at providing an understanding of both approaches and techniques for the management of this change.

15 credits
Database Design

Effective data management is key to any organisation, particularly with the increasing availability of large and heterogeneous datasets (e.g. transactional, multimedia and geo-spatial data). A database is an organised collection of data, typically describing the activities of one or more organisations, and a core component of modern information systems. A Database Management System (DBMS) is software designed to assist in maintaining and utilising large collections of data, and is becoming a necessity for all organisations. This module provides an introduction to the area of databases and database management, relational database design and a flavour of some advanced topics in current database research that deal with different kinds of data often found within an organisational context. Lectures are structured into three main areas: an introduction to databases; the process of designing relational databases; and advanced topics (e.g. data warehouses and non-relational databases). The course includes a series of online tasks with supporting 'drop-in' laboratories aimed at providing you with the skills required to implement a database in Oracle and extract information using the Structured Query Language (SQL).

15 credits
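The relational ideas this module teaches can be tried out without an Oracle installation. The sketch below uses Python's built-in sqlite3 module with an invented two-table schema to show table creation, insertion, and an SQL join with aggregation.

```python
"""A small relational-database sketch using Python's built-in sqlite3.
The department/employee schema and data are invented for illustration."""

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE department (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute(
    "CREATE TABLE employee ("
    " id INTEGER PRIMARY KEY, name TEXT,"
    " dept_id INTEGER REFERENCES department(id))"
)
conn.executemany("INSERT INTO department VALUES (?, ?)",
                 [(1, "Sales"), (2, "Research")])
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [(1, "Ada", 2), (2, "Grace", 2), (3, "Alan", 1)])

# SQL retrieval: join the two tables and count employees per department.
rows = conn.execute(
    "SELECT d.name, COUNT(*) FROM employee e"
    " JOIN department d ON e.dept_id = d.id"
    " GROUP BY d.name ORDER BY d.name"
).fetchall()
print(rows)  # [('Research', 2), ('Sales', 1)]
```

The same CREATE/INSERT/SELECT statements, with minor dialect differences, carry over to Oracle and other relational DBMSs.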
Academic and Workplace Library, Information and Knowledge Services

This module introduces students to the purposes, functions and practices of a range of academic research and other specialist library and information/knowledge services in the public and private sectors. It considers the challenges of delivering and developing services in a demanding, fast-moving and complex environment. Lectures are combined with sector-based case studies presented by visiting speakers drawn from diverse backgrounds giving extensive opportunities for interaction with specialist practitioners.

15 credits
User-Centred Design and Human-Computer Interaction

Interface design and usability are central to the experience of interacting with computers. The module introduces usability principles and the design process for interactive systems, exploring four major themes: first, user psychology and the cognitive principles underlying interface design; second, user interface architectures, modes of interaction, metaphors and navigational structures; third, the user interface design process, including task analysis, modelling constructs and prototyping techniques; and fourth, the evaluation of user interfaces, covering concepts of usability and the goals and types of evaluation. The module focuses on the underlying principles of HCI and the user-centred design approach, with practical sessions to demonstrate these principles.

15 credits
Archives and Records Management

This module prepares students for roles within archives and records management, with an emphasis on archives. Students will develop knowledge and awareness of key theories and practices in archives and records management. The module introduces students to some of the principal issues surrounding the provision of archives and records management services and the challenges of meeting user needs within an organisational context. In addition to presenting the fundamental principles, the second part of the module focuses on specific topics of interest, such as community archiving, digital preservation, web archiving and oral history collecting.

15 credits

Other courses

Postgraduate Certificate requires a total of 60 credits
Postgraduate Diploma requires a total of 120 credits

The content of our courses is reviewed annually to make sure it's up-to-date and relevant. Individual modules are occasionally updated or withdrawn. This is in response to discoveries through our world-leading research; funding changes; professional accreditation requirements; student or employer feedback; outcomes of reviews; and variations in staff or student numbers. In the event of any change we'll consult and inform students in good time and take reasonable steps to minimise disruption. We are no longer offering unrestricted module choice. If your course included unrestricted modules, your department will provide a list of modules from their own and other subject areas that you can choose from.

Open days

An open day gives you the best opportunity to hear first-hand from our current students and staff about our courses. You'll find out what makes us special.

Upcoming open days and campus tours


  • 1 year full-time
  • 2 years part-time
  • 3 years part-time


A variety of teaching methods are used, combining lectures from academic staff and professional practitioners with seminars, tutorials, small-group work and computer laboratory sessions.

There's a strong emphasis on problem-solving and individual aspects of learning, with the expectation that you’ll engage in independent study, reading and research in support of your coursework.

Teaching consists of two 15-week semesters, after which you’ll write your dissertation.


Assessments vary depending on the modules you choose but may include essays, report writing, oral presentations, in-class tests and group projects.

There's also a dissertation of 10–15,000 words, which provides the opportunity, under one-to-one supervision, to focus on a topic of your choice. You may choose to carry out your dissertation with an external organisation; for instance, if you are a Professional Enhancement student, your project could be directly related to your work situation. In the past, students who have carried out such dissertations have welcomed the opportunity to tackle real-life problems.

Your career

We're the leading school of our kind in the UK and have a global reputation for excellence. Our MSc develops the skills you need to work in the fast-paced and evolving field of information management. After completing the course, you'll be equipped for a career in industry or research.

Our graduates have gone on to careers that include:

  • Project Manager, IBM
  • Metadata Specialist, The British Library
  • Wealth Planning Manager, China Merchants Bank
  • IT Director, Lloyds Banking Group
  • Business Analyst, Citibank
  • Director of Communications, Harvard University
  • Head of Library and Information Services, Foreign and Commonwealth Office
  • Vice-President, Goldman Sachs Japan Services Co.
  • Product Engineer, BenQ
  • Management Trainee,

Career pathways

Our modules prepare you for a range of career pathways, including the following. If you're interested in one of these career pathways, your tutors will recommend the most suitable module choices.

Digital Business

This involves managing and delivering products and services. Possible job titles include:

  • e-commerce manager
  • digital product/service delivery manager
  • digital marketer
  • digital product owner

Information Technology

This involves working with organisations to make improvements using information technologies. Possible job titles include:

  • business analyst
  • systems analyst
  • IT project manager
  • database administrator
  • operational researcher

Information Science

Information scientists manage an organisation's information resources and make sure they're readily available. Possible job titles include:

  • information manager
  • information officer
  • knowledge manager
  • management information analyst
  • information governance officer
  • business intelligence officer
  • reporting analyst
  • information analyst
  • data privacy analyst

Read more about careers in information

PhD student and Librarianship MA graduate Itzelle Medina Perea shares her experiences of studying at the Information School.


We invested a six-figure sum to create leading-edge, flexible and technology-rich facilities for learning and teaching that are consistent with our reputation as a modern, highly respected and world-leading school. The new facilities include the iLab, the iSpace and a computer laboratory for collaborative learning.

We have three research labs on-site with workspace for over 80 researchers and a dedicated IT support team to assist with technical queries and requests. 

We also have a number of other newly-refurbished spaces which are available to all our researchers.

More about the Information School facilities.


The University of Sheffield Information School is ranked number one in the world for library and information management in the QS World University Rankings by subject 2021. These rankings are based upon academic reputation, employer reputation and research impact.

The school has been at the forefront of developments in the information field for more than fifty years. The subject is characterised by its distinctive, interdisciplinary focus on the interactions between people, information and digital technologies. It has the ultimate goal of enhancing information access, and the management, sharing and use of information, to benefit society.

When you come to study with us you'll be an integral part of our research culture. The school is your home and we pride ourselves on the friendliness and helpfulness of our staff.

We offer an outstanding academic education through a wide range of taught postgraduate degrees which embed the principles of research-led teaching.

When you join any of our degree programmes you'll develop a critical understanding of current issues in library and information management. You'll benefit from being taught by staff who are undertaking leading-edge research and who have many links with industry.

As part of our mission to provide world-quality university education in information, we aim to inspire and help you pursue your highest ambitions for your academic and professional careers.

Entry requirements

Main course

You'll need at least a 2:1 in any subject.

You do not need work experience.

Professional Enhancement

This is a different route to the main course. It's aimed at those who already have relevant work experience.

To apply for this course you need either:

  • an undergraduate degree in any subject discipline and at least 2 years' relevant work experience, or
  • an undergraduate degree in any subject together with an acceptable relevant professional qualification and at least 2 years' relevant work experience, or
  • an undergraduate degree in any subject area, and at least 5 years' relevant work experience.

If you do not have an undergraduate degree but have other qualifications and substantial relevant work experience you may be considered for entry onto the Postgraduate Certificate or Diploma courses.

Overall IELTS score of 6.5, with a minimum of 6.0 in each component or equivalent.

Pathway programme for international students

If you're an international student who does not meet the entry requirements for this course, you have the opportunity to apply for a pre-masters programme in Business, Social Sciences and Humanities at the University of Sheffield International College. This course is designed to develop your English language and academic skills. Upon successful completion, you can progress to degree level study at the University of Sheffield.

If you have any questions about entry requirements, please contact the department.


You can apply for postgraduate study using our Postgraduate Online Application Form. It's a quick and easy process.

Applications close on Friday 5 August 2022 at 5pm.

Apply now

Any supervisors and research areas listed are indicative and may change before the start of the course.

Our student protection plan

Recognition of professional qualifications: from 1 January 2021, in order to have any UK professional qualifications recognised for work in an EU country across a number of regulated and other professions you need to apply to the host country for recognition. Read information from the UK government and the EU Regulated Professions Database.

Thu, 01 Oct 2020
Killexams : Siemens D500 Powder Diffractometer


The Siemens D500 powder diffractometer is configured with a graphite monochromator and an IBM-compatible workstation. This system is primarily used to satisfy the needs of undergraduate research and is housed in the undergraduate wing of the M&M building.

A close up of the XRD D500 instrument.


This theta-theta configured goniometer permits automated collection of intensity vs. scattering angle scans. Data reduction schemes include unit cell determination, pattern indexing, and precision lattice parameter determination. Crystalline compounds can be identified through indexing with the built-in JCPDS powder diffraction database.
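The heart of such data reduction is Bragg's law. A minimal Python sketch (illustrative only, not the Siemens software; the default wavelength assumes a Cu Kα1 lab source) converts a measured 2θ peak position into an interplanar d-spacing:

```python
import math

def d_spacing(two_theta_deg: float, wavelength: float = 1.5406) -> float:
    """Convert a 2-theta peak position (degrees) to a d-spacing (angstroms)
    via Bragg's law: n * lambda = 2 * d * sin(theta), taking n = 1.
    Default wavelength is Cu K-alpha1 in angstroms."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# A peak at 2-theta = 44.7 degrees with Cu radiation gives d of about 2.03 A
# (close to the (111) spacing of FCC nickel, for example).
print(round(d_spacing(44.7), 2))  # 2.03
```

In practice the instrument software refines many such peak positions simultaneously to obtain precision lattice parameters, but each peak reduces to this same relation.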



MSE 3120 - Materials Characterization I

Fundamentals of microstructural and chemical characterization of materials. Examines the physical principles controlling the various basic characterization techniques. Topics include crystallography, optics, optical and electron microscopy, and diffraction. Laboratory focuses on proper operational principles of characterization equipment, which includes optical and other microscopy methods and various diffraction techniques.

  • Credits: 4.0
  • Lec-Rec-Lab: (2-1-3)
  • Semesters Offered: Spring
  • Pre-Requisite(s): MY 2110 or MSE 2110
Mon, 25 Feb 2019
Killexams : A Short History Of AI, And Why It’s Heading In The Wrong Direction

Sir Winston Churchill often spoke of World War 2 as the “Wizard War”. Both the Allies and Axis powers were in a race to gain the electronic advantage over each other on the battlefield. Many technologies were born during this time – one of them being the ability to decipher coded messages. The devices that were able to achieve this feat were the precursors to the modern computer. In 1946, the US Military developed the ENIAC, or Electronic Numerical Integrator And Computer. Using over 17,000 vacuum tubes, the ENIAC was a few orders of magnitude faster than all previous electro-mechanical computers. The part that excited many scientists, however, was that it was programmable. It was the notion of a programmable computer that would give rise to the idea of artificial intelligence (AI).

As time marched forward, computers became smaller and faster. The invention of the transistor semiconductor gave rise to the microprocessor, which accelerated the development of computer programming. AI began to pick up steam, and pundits began to make grand claims of how computer intelligence would soon surpass our own. Programs like ELIZA and Blocks World fascinated the public and certainly gave the perception that when computers became faster, as they surely would in the future, they would be able to think like humans do.

But it soon became clear that this would not be the case. While these and many other AI programs were good at what they did, neither they, or their algorithms were adaptable. They were ‘smart’ at their particular task, and could even be considered intelligent judging from their behavior, but they had no understanding of the task, and didn’t hold a candle to the intellectual capabilities of even a typical lab rat, let alone a human.

Neural Networks

As AI faded into the sunset in the late 1980s, it allowed Neural Network researchers to get some much needed funding. Neural networks had been around since the 1960s, but were actively squelched by the AI researchers. Starved of resources, not much was heard of neural nets until it became obvious that AI was not living up to the hype. Unlike computers – what original AI was based on – neural networks do not have a processor or a central place to store memory.

Deep Blue computer

Neural networks are not programmed like a computer. They are connected in a way that gives them the ability to learn from their inputs. In this way, they are similar to a mammal brain. After all, in the big picture a brain is just a bunch of neurons connected together in highly specific patterns. The resemblance of neural networks to brains gained them the attention of those disillusioned with computer based AI.

In the mid-1980s, a neural network by the name of NETtalk was built that was able to, on the surface at least, learn to read. It was able to do this by learning to map patterns of letters to spoken language. After a little time, it had learned to speak individual words. NETtalk was hailed as a triumph of human ingenuity, capturing news headlines around the world. But from an engineering point of view, what it did was not difficult at all. It did not understand anything. It just matched patterns with sounds. It did learn, however, which is something computer based AI had much difficulty with.
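The principle NETtalk relied on — learning a mapping from input patterns to output categories by adjusting connection weights — can be shown with a deliberately tiny sketch. This toy perceptron (an invented caricature, nothing like the real NETtalk architecture) learns to label letters as vowels or consonants from one-hot inputs, purely by pattern matching:

```python
import string

LETTERS = string.ascii_lowercase
VOWELS = set("aeiou")

def one_hot(ch):
    """26-element indicator vector for a lowercase letter."""
    return [1.0 if c == ch else 0.0 for c in LETTERS]

weights = [0.0] * len(LETTERS)
bias = 0.0

# Mistake-driven perceptron training: target +1 for vowels, -1 otherwise.
for _ in range(100):
    for ch in LETTERS:
        x = one_hot(ch)
        target = 1 if ch in VOWELS else -1
        activation = sum(w * xi for w, xi in zip(weights, x)) + bias
        if (1 if activation >= 0 else -1) != target:
            weights = [w + target * xi for w, xi in zip(weights, x)]
            bias += target

correct = sum(
    1
    for ch in LETTERS
    if ((sum(w * xi for w, xi in zip(weights, one_hot(ch))) + bias) >= 0)
    == (ch in VOWELS)
)
print(correct)  # 26: the toy net has learned the mapping
```

Like NETtalk, the network ends up behaving correctly without any notion of what a vowel is; the "knowledge" is nothing but weight values shaped by its inputs.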

Eventually, neural networks would suffer a similar fate as computer based AI – a lot of hype and interest, only to fade after they were unable to produce what people expected.

A New Century

The transition into the 21st century saw little in the development of AI. In 1997, IBM's Deep Blue made brief headlines when it beat [Garry Kasparov] at his own game in a series of chess matches. But Deep Blue did not win because it was intelligent. It won because it was simply faster. Deep Blue did not understand chess, just as a calculator does not understand math.
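The flavor of that brute-force approach can be shown with a toy example. This sketch (a drastic simplification, not Deep Blue's actual algorithm) exhaustively searches the game tree of Nim — take 1 to 3 stones, last take wins — and plays perfectly without any notion of what the game means:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones: int) -> bool:
    """True if the player to move can force a win from this position.
    A move wins if it leaves the opponent in a position they cannot win."""
    return any(not can_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

# Exhaustive search rediscovers the classic result: positions that are
# multiples of four are losses for the player to move.
losing = [n for n in range(1, 13) if not can_win(n)]
print(losing)  # [4, 8, 12]
```

Deep Blue's search was vastly deeper and guided by hand-tuned evaluation functions, but the point stands: enumerating outcomes quickly produces strong play with zero understanding.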

Example of Google’s Inceptionism. The image is taken from the middle of the hierarchy during visual recognition.

Modern times have seen much of the same approach to AI. Google is using neural networks combined with a hierarchical structure and has made some interesting discoveries. One of them is a process called Inceptionism. Neural networks are promising, but they still show no clear path to a true artificial intelligence.

IBM’s Watson was able to best some of Jeopardy’s top players. It’s easy to think of Watson as ‘smart’, but nothing could be further from the truth. Watson retrieves its answers by searching terabytes of information very quickly. It has no ability to actually understand what it’s saying.

One can argue that the process of trying to create AI over the years has influenced how we define it, even to this day. Although we all agree on what the term “artificial” means, defining what “intelligence” actually is presents another layer to the puzzle. Looking at how intelligence was defined in the past will give us some insight in how we have failed to achieve it.

Alan Turing and the Chinese Room

Alan Turing, father of modern computing, developed a simple test to determine if a computer was intelligent. It’s known as the Turing Test, and goes something like this: If a computer can converse with a human such that the human thinks he or she is conversing with another human, then one can say the computer imitated a human, and can be said to possess intelligence. The ELIZA program mentioned above fooled a handful of people with this test. Turing’s definition of intelligence is behavior based, and was accepted for many years. This would change in 1980, when John Searle put forth his Chinese Room argument.

Consider an English speaking man locked in a room. In the room is a desk, and on that desk is a large book. The book is written in English and has instructions on how to manipulate Chinese characters. He doesn’t know what any of it means, but he’s able to follow the instructions. Someone then slips a piece of paper under the door. On the paper is a story and questions about the story, all written in Chinese. The man doesn’t understand a word of it, but is able to use his book to manipulate the Chinese characters. He fills in the answers using his book, and passes the paper back under the door.

The Chinese speaking person on the other side reads the answers and determines they are all correct. She comes to the conclusion that the man in the room understands Chinese. It’s obvious to us, however, that the man does not understand Chinese. So what’s the point of the thought experiment?

The man is a processor. The book is a program. The paper under the door is the input. The processor applies the program to the input and produces an output. This simple thought experiment shows that a computer can never be considered intelligent, as it can never understand what it’s doing. It’s just following instructions. The intelligence lies with the author of the book or the programmer. Not the man or the processor.
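The room's mechanics are easy to caricature in code. In this sketch (the rule book and phrases are invented for illustration), a lookup table produces correct Chinese answers while nothing in the program understands a word of them:

```python
# The "book": a table of symbol-manipulation rules. The "man" is the
# function that blindly applies them to whatever slips under the door.
RULE_BOOK = {
    "你好吗?": "我很好。",    # "How are you?" -> "I am fine."
    "你会说中文吗?": "会。",  # "Do you speak Chinese?" -> "Yes."
}

def man_in_room(slip_of_paper: str) -> str:
    # Symbols in, symbols out; the man reads none of them.
    return RULE_BOOK.get(slip_of_paper, "请再说一遍。")  # "Please repeat."

print(man_in_room("你好吗?"))  # 我很好。
```

The Chinese speaker outside judges the answers correct, yet any "understanding" lives entirely with whoever wrote the rule book — exactly Searle's point.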

A New Definition of Intelligence

In all of mankind’s pursuit of AI, we have been, and still are, looking to behavior as a definition of intelligence. But John Searle has shown us how a computer can produce intelligent behavior and still not be intelligent. How can the man or the processor be intelligent if it does not understand what it’s doing?

All of the above has been said to draw a clear line between behavior and understanding. Intelligence simply cannot be defined by behavior. Behavior is a manifestation of intelligence, and nothing more. Imagine lying still in a dark room. You can think, and are therefore intelligent. But you’re not producing any behavior.

Intelligence should be defined by the ability to understand. [Jeff Hawkins], author of On Intelligence, has developed a way to do this with prediction. He calls it the Memory Prediction Framework. Imagine a system that is constantly trying to predict what will happen next. When a prediction is met, the function is satisfied. When a prediction is not met, focus is pointed at the anomaly until it can be predicted. For example, you hear the jingle of your pet’s collar while you’re sitting at your desk. You turn to the door, predicting you will see your pet walk in. As long as this prediction is met, everything is normal. It is likely you’re unaware of doing this. But if the prediction is violated, it brings the scenario into focus, and you will investigate to find out why you didn’t see your pet walk in.

This process of constantly trying to predict your environment allows you to understand it. Prediction is the essence of intelligence, not behavior. If we can program a computer or neural network to follow the prediction paradigm, it can truly understand its environment. And it is this understanding that will make the machine intelligent.
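A minimal sketch of that prediction loop — an illustrative toy, not Hawkins' actual Memory Prediction Framework — might track which event usually follows which, predict the next one, and raise an "attention" flag when the prediction is violated:

```python
from collections import defaultdict, Counter

class Predictor:
    """Learns event transitions and flags surprising (unpredicted) events."""

    def __init__(self):
        self.transitions = defaultdict(Counter)  # prev event -> next-event counts
        self.previous = None

    def observe(self, event: str) -> bool:
        """Record an event; return True if it violated the current prediction."""
        surprised = False
        if self.previous is not None:
            counts = self.transitions[self.previous]
            if counts:  # only predict once we have seen this context before
                predicted = counts.most_common(1)[0][0]
                surprised = predicted != event
            counts[event] += 1
        self.previous = event
        return surprised

p = Predictor()
# The routine: a collar jingle, then the pet walks in. Once the pattern is
# learned, a different outcome draws attention.
surprises = [p.observe(e) for e in
             ["jingle", "pet", "jingle", "pet", "jingle", "wind"]]
print(surprises)  # [False, False, False, False, False, True]
```

Only the final event, which breaks the learned jingle-then-pet pattern, is flagged — the anomaly that, in Hawkins' terms, pulls the scenario into focus.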

So now it’s your turn. How would you define the ‘intelligence’ in AI?

Sun, 24 Jul 2022, Will Sweatman
Killexams : Patenting cannabis in India and beyond

The executive vice president of partnerships and acquisitions at the NPE explains how his company’s deal with Intel came to be

South Korean lawyers welcome the trademark guidelines but say the appellate board, courts, and other IP offices may not necessarily agree with the KIPO

Lawyers for Craig Wright will seek approval for expert evidence to help the England and Wales High Court understand how autism affects his character

IP counsel say rude judges can dent their confidence but that the effect on clients should not be underestimated

Sources say the Supreme Court’s decision to take on Sky v SkyKick puts uncertainty back into the mix, just when IP owners thought they knew what was what

Charles Feng, partner at East & Concord in Beijing, explains why filing for a trademark early is still a brand’s best bet

Counsel at IBM, Novartis, BMS and four other companies delve into whether the new Section 101 bill addresses their concerns

Clearance searches are especially important when counsel can’t rely on the USPTO’s opinion before key deadlines, say sources

VLSI case halted in Delaware; Netflix sues Bridgerton rip-off; Ex-GSK scientist escapes damages; US Copyright Office debuts new software; Abbvie scores Humira patent thicket win; Russia tables bill on illegal blocking of copyrighted content

An England and Wales High Court judgment over a disclosure error shows why law firms must never play a distant role when advising clients

Mon, 11 Jul 2022
Killexams : Global Data Center Market to Reach $311.63 Billion | Automation, Blockchain, and AI to Change Face of Data Center Market

SkyQuest Technology Consulting Pvt. Ltd.

The global data center market is valued at USD 220 billion in 2021 and is projected to attain a market size of USD 311.63 billion by 2028 at a CAGR of 5.10% during the forecast period (2022–2028).

Westford, USA, July 18, 2022 (GLOBE NEWSWIRE) -- As the amount of technology in today's world skyrockets, professional data centers are among the industries most significantly affected. The data center skills gap, or shortage of skilled, dedicated professionals, continues to be a pressing concern across the global data center market. Current estimates and forecasts show the data center market booming, with an impact not just on the computing sector but on other sectors as well. From the adoption of a hybrid approach that leverages colocation and new technologies, to the need for in-house data center staff, the question of how widespread this skills gap remains is likely to stay a hot topic in the years to come.

Top 4 Trends That are Transforming the Data Center Market

Building out hyperscale data centers: Data centers are continuing to grow in size, and larger facilities are becoming more common. This is mainly due to the demand for increased capacity and improved performance. As a result, more and more companies are building out hyperscale data centers, which represent hundreds of millions or even billions of dollars' worth of infrastructure.

Adoption of blockchain technology: Blockchain technology is becoming increasingly popular in the data center world. This is due to its potential for creating safer and more efficient systems. Additionally, blockchain could play a role in replacing many traditional systems used in data center management.

Embrace of artificial intelligence (AI): AI is already being used in a number of different ways in operations across the global data center market. For example, AI can help identify patterns and trends in data so that it can be more efficiently processed. Additionally, AI can be used to improve machine learning algorithms and provide better insights into customer behavior.

Get sample copy of this report:

It should be noted that the deep learning algorithms powering AI have provided great benefits in computer vision. Not only can these algorithms recognize objects more reliably than a human being, they are also efficient enough to be offered at the lower end of the market price range.

Automation: Data center operations that don't use automated routines require manual intervention during peak hours, which leads to error-prone activity and additional effort to track these manual tasks. Automation has enabled companies to scale resources more effectively and to ensure seamless operations in both performance and quality. The "predictive" nature of automated systems will enable data centers to react faster when needed.

Rapid Expansion of Big Data, Need for Advanced Infrastructure and Strong Impact of 5G to Bolster Data Center Market

The growth of cloud services and the massive increase in big data have made the data center an essential part of modern business. However, due to the increasing demand for these services, businesses are requiring more advanced infrastructure in order to keep up. One area where this is particularly evident is with regard to 5G wireless technology.

5G wireless technology is currently in its early stages, but it has the potential to make a significant impact on the way businesses operate. In particular, 5G wireless technology has the ability to transmit large amounts of data quickly and efficiently. This will allow businesses to conduct various types of transactions across a wide variety of platforms without having to rely on traditional network infrastructure.

As seen by the rapid expansion of cloud providers and big data, the future looks bright for the data center market. This is primarily due to the fact that 5G wireless technology has the potential to revolutionize the way businesses operate.

High Staff Turnover Due to Shortage of Skilled Professionals

As the industry moves towards automation, it's becoming difficult to find employees with the required skills. According to a recent study, 88% of respondents believe that high staff turnover is due to a lack of skilled professionals.

Automation is expected to play an ever-increasing role in data center operations, and as such, it's becoming more difficult to find employees with the requisite skills. This trend is likely to continue, especially as technology evolves and job functions become more specialized.

The high staff turnover rate in the data center market is mainly due to the lack of skilled professionals. According to the report "The State of Staffing in the Data Center Market" by staffing service provider Kelly Services, the turnover rate for tech services professionals is 47%. Even though this number may seem low, when considering that the average tenure for a tech services professional is only 2.9 years, it is clear that there is a lack of stability in this field. In order to combat this issue and bolster the skills of these professionals, it has become imperative for companies to focus on training and development policies.

While it may be costly to implement such policies, it will be worth it in the long run. The fact is that without skilled professionals, companies cannot maintain their competitive edge in the data center market. It is important for organizations to find a solution to this issue before it becomes too costly to fix.

Browse summary of the report and Complete Table of Contents (ToC):

Lack of Innovation in Colocation Facilities to Adversely Affect Data Center Market Growth

The data center market has seen a decline in innovation in recent years. Many facilities are using the same equipment, designs, and configurations for their colocation spaces. This stagnation may be impacting the industry as a whole. In order to keep up with the ever-changing technological landscape, data centers need to foster innovative practices and new designs. Here are some trends that may help reinvigorate the space:

1. The rise of cloud services: Cloud services have taken over as the dominant form of computing and data storage. This trend is likely to continue as more companies shift their IT operations to the cloud. Implementing cloud-based products and services within the data center can help improve efficiency and productivity.

2. The impact of virtualization: Virtualization has revolutionized how organizations work with data by allowing them to create multiple instances of software on dedicated servers. This capability allows you to run multiple applications on one machine while freeing up processing power and storage space. Virtualization can also provide added security benefits by isolating critical systems from possible cyberattacks.

3. The importance of big data: Increasingly, organizations are ingesting large amounts of data in order to identify patterns and insights that can be used for making informed business decisions.

In the past few years, the data center market has seen colocation facilities fall behind in technology advancement and fail to meet the needs of businesses. This has driven many businesses to look for innovative data center solutions that can help them outpace the competition.

One of the latest trends in the data center market is the move towards containerized data centers, which allow businesses to standardize their operations and manage their data effectively. In addition, these containers offer businesses a number of other benefits, such as more flexibility and lower costs.

So, How Are Companies Tapping the Potential of the Data Center Market?

The data center market is currently in a transitional period. It faces challenges from Moore's Law being strained by silicon manufacturing capacity and from the commoditization of infrastructure services. The industry is also adjusting to new business models such as the public cloud, big data and mobile computing. It is important for data center operators to remain agile and stay ahead of the curve to ensure continued success.

Companies are now looking for automation and more transparency in the data center operations which will help them get more out of their data centers and reduce costs. However, with increased automation comes associated risks. Moreover, experts are of the opinion that computing in the data center will shift to mobile and cloud-based platforms in the near future; this will require intelligent management and efficient use of resources to maintain optimal performance while minimizing cost.

So, what does the future hold for the data center market? Expanding adoption of innovative technologies such as blockchain will help businesses stay ahead of the curve, while colocation facilities will need to adapt and improve their offerings in order to remain competitive.

Speak to Analyst for your custom requirements:

Increased Internet Penetration and Rising Network of OTT Apps are Aligning Future of Data Center Market Towards Expansion

Rapid growth in usage of OTT platforms is expected to continue over the next few years. According to a study by Global Web Index, the global active user base of OTT platforms crossed 512 million in 2021, growing at a compound annual growth rate of 68%. The study forecasts that the global active user base of OTT platforms will reach 1.7 billion by 2025. This growth is being driven by expanding range of services offered by these platforms and increasing adoption among users.

OTT platforms are increasingly being used as an alternative to traditional TV content delivery. They are also being employed for social networking, gaming, education, and other purposes. In general, OTT platforms are changing the way people consume media content.

Two significant trends impacting the data center market are the increasing penetration of streaming platforms and a growing user base. As streaming platforms such as Netflix, Amazon Prime Video, and Hulu become more popular, users are consuming more multimedia content. In turn, this is having a significant impact on the data center market, leading to increased demand for streaming services, greater data storage requirements, and increased demand for connectivity infrastructure.

The data center market is witnessing an increase in the penetration of streaming platforms, especially with the advent of Amazon Web Services (AWS). This has led to an increase in the number of users and a corresponding rise in data throughput. In addition, data center operators are benefiting from new technologies like artificial intelligence (AI) and machine learning that are helping them optimize their infrastructure. The future holds many opportunities for data center operators who can tap into these trends.

Key Players in Data Center Market

Related Reports in SkyQuest’s Library:

Global Engineering Services Outsourcing Market

Global Micro Mobile Data Center Market

Global Software Defined Data Center Market

Data Center Colocation Market

Data Center Cooling Market

About Us:

SkyQuest Technology is a leading growth consulting firm providing market intelligence, commercialization and technology services. It has 450+ happy clients globally.


1 Apache Way, Westford, Massachusetts 01886


USA (+1) 617-230-0741



Mon, 18 Jul 2022