Don't miss these C2030-284 PDF downloads

We provide legitimate and approved Foundations of IBM Cloud Computing Architecture V4 questions and answers, along with the most complete and most recent C2030-284 exam cram, covering almost all exam topics. With the database behind our C2030-284 cheat sheets, there is no need to risk your opportunity on research books or to burn through 10-20 hours of study; simply work through our C2030-284 questions and answers to ace the exam.

Exam Code: C2030-284 Practice test 2022 by team
Foundations of IBM Cloud Computing Architecture V4
IBM Architecture information source
Killexams : IBM Architecture information source - BingNews

Killexams : IT Modernization With a Sustainability Edge

When designing an application modernization roadmap, add the green IT dimension to reduce carbon emissions and energy costs. Wed, 12 Oct 2022 13:07:00 -0500

Killexams : The Four Pillars To AI Innovation: How Businesses Can Jumpstart The AI Journey

By Anand Mahurkar, CEO, Findability.Sciences.

Rapid technological advances are changing how people do business—especially in post-pandemic times. Currently, the demand is for AI technology. PwC reported that "AI could contribute up to $15.7 trillion to the global economy in 2030" and will continue to be a game-changer by enabling organizations to increase productivity and consumption.

All businesses—be it healthcare, manufacturing, hospitality or even entertainment—are adopting AI to gain deep insight into their business processes and surface leading indicators that can help an organization prosper. To fully appreciate how AI is shaping the game, let’s look at some stats:

• In 2020, the AI in banking worldwide market was worth nearly $4 billion. By 2030, it's expected to be valued at over $64 billion.

• AI in healthcare was valued at $7.9 billion in 2021 and is predicted to grow to $201.3 billion by 2030.

Although these numbers are promising, it’s important to note that for a business to succeed in its AI journey, the organization needs to be ready for AI innovation. This means working on its IA—infrastructure architecture—before the real AI. Doing otherwise can lead the venture to fall by the wayside. According to Gartner analysts, "85% of AI and machine learning projects fail to deliver, and only 53% of projects make it from prototypes to production."

One major roadblock to a successful AI implementation is that although organizations usually have petabytes of data, that data is often unorganized, uncleaned, unanalyzed and scattered across a number of systems, from ERP to CRM. Often, organizations simply don't have the right infrastructure or expertise to make sense of it.

For AI programs to work, data needs to be collected, cleaned and analyzed. However, the reality is that 82% of organization leaders say that data quality hinders their data integration projects and that they spend too much time on data cleaning, integration and preparation.
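As a concrete illustration of that collect-and-clean step, a minimal pass over raw records might deduplicate on a key, drop rows missing required fields, and normalize text. This is a hypothetical sketch; the field names ("customer_id", "email") and rules are assumptions, not any vendor's pipeline:

```python
def clean_records(records, required=("customer_id", "email")):
    """Drop incomplete rows, normalize strings, and deduplicate on the key."""
    seen = set()
    cleaned = []
    for rec in records:
        # Drop rows missing any required field
        if any(not rec.get(f) for f in required):
            continue
        # Normalize: trim whitespace and lowercase every string field
        rec = {k: v.strip().lower() if isinstance(v, str) else v
               for k, v in rec.items()}
        key = rec["customer_id"]
        if key in seen:  # deduplicate on the normalized primary key
            continue
        seen.add(key)
        cleaned.append(rec)
    return cleaned

raw = [
    {"customer_id": "A1", "email": " Ana@Example.COM "},
    {"customer_id": "A1", "email": "ana@example.com"},   # duplicate
    {"customer_id": "A2", "email": None},                # missing email
]
print(clean_records(raw))
```

Even a toy version like this shows where the "too much time" goes: every rule (what counts as a duplicate, which fields are required) is a business decision the CoE has to make before any model training starts.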

The goal for AI solution providers should be to help businesses become data-driven so that enterprises can utilize AI to the fullest in order to drive insights and predictions. AI solution providers can work with enterprises on these obstacles by keeping these four pillars in mind: creating a center of excellence and a collaborative AI team, prioritizing data modernization, embracing cloud transformation and leveraging partnerships.

1. Create a center of excellence.

Solutions providers should team up with a company’s internal employees, train or mentor the company’s representatives on the AI program and create their own center of excellence (CoE). Although the team might include data scientists or IT professionals, the CoE can also include marketing executives, end users, consumers and statisticians—all the minds required for collaboration. Consider the domain knowledge, understanding of customers and customer data, technology know-how and interpretation of data. The organization can work with its AI solution provider to create a roadmap as a guide on what benefits AI can provide in each area.

Each team member should be carefully selected based on the requirements and expertise the business needs. They should not only immerse themselves in the project but also maintain the company culture and work toward its strategic goals.

2. Prioritize data modernization.

Organizations need information assets before AI. Creating a data architecture around an organization’s data assets is a critical next step. The newly formed team, along with the solutions provider, should handle this. They will have to determine which data should be collected and create an information architecture that can be utilized for AI purposes.

The first order of business should be how to collect data, which includes identifying data silos. It’s also necessary to utilize "wide data," or data coming from a variety of sources—internal and external, structured and unstructured. "Big data," or massive volumes of data arriving at high speed, needs to be collected, too.

The next step is to determine the processes and use cases the new data architecture can support. The team should keep in mind end-to-end migration services with automation that can take the company from planning to execution.

3. Embrace cloud transformation.

I highly suggest that businesses and customers today make the switch to cloud services. Many organizations are still utilizing legacy and on-premises technologies to store data. The cloud is now the better option when creating an AI framework. It lessens the bulk of hardware and allows an organization to access the AI system from any device without further installations and processes. If data is still stored on physical servers, it just needs to be migrated to secured cloud servers.

4. Leverage partnerships.

Although big organizations usually have licenses with IBM Cloud Pak or Snowflake, their stumbling block to a successful AI journey is that they don’t always know how to utilize these tools for AI implementation. The challenge is connecting the dots—utilizing third-party services for existing internal machines or data to create a prediction engine.

In addition, many popular warehousing or other big data technologies don't necessarily have AI plugins or prediction engines. The AI team should be tasked with the challenge of creating a system that uses the licenses or partnerships the company has for the solution they want to have. The AI solution provider must build the bridge that gets them to the finish line.

The reality is that the AI journey can be filled with potholes. There's a large amount of data sitting in an organization's system, ready for harnessing but often siloed by disparate teams. Other issues are that many companies are already invested in technology that can't be utilized for AI and, sometimes, the knowledge isn't there on how to leverage existing licenses and technologies for AI purposes.

The key to guiding companies in AI innovation is to let them visualize the possibilities and remind executive leaders that technology is advancing quickly, and AI is now typically considered a must-have asset for business sustainability. So, use these pillars and present the potential of an AI-driven enterprise.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.

Mon, 10 Oct 2022 02:15:00 -0500 Anand Mahurkar
Killexams : Platform Architecture Market Projected to Reach US$ 19,235.0 Million, Growing at a CAGR of 14.1% from 2022 to 2032

The global platform architecture market is estimated to be valued at US$ 5,127.0 Million in 2022 and is expected to reach US$ 19,235.0 Million by 2032, with a CAGR of 14.1% from 2022 to 2032.

Many businesses use platform architecture to analyse system performance on a single screen. Platform architecture is a graphical environment used for data capture, simulation, and analysis.

Platform architecture enables system designers to optimise and investigate hardware-software partitioning and infrastructure configuration in order to achieve precise system performance at a low cost.


Platform architecture helps architects build task-driven workload models for early architectural analysis.

Enterprises are implementing platform architectures that help them respond quickly to changing market conditions and improve operational efficiency.

Platform Architecture Market: Drivers and Challenges

Cloud and internet of things (IoT) platform architectures are the major factors driving the platform architecture market. An IoT platform architecture lays the foundation for building, managing and securing IoT deployments. Moreover, enterprises are implementing cloud platform architectures that provide both a development and a runtime environment for cloud applications through services such as platform as a service (PaaS).

Cloud platform architecture offers several solutions, such as on-demand extension and on-premise extension. The Hadoop platform architecture is another emerging factor driving the platform architecture market.

Developing a scalable architecture for an application is one of the major challenges faced by the platform architecture market.

Platform Architecture Market: Regional Overview

Presently, the North America region holds the largest share of the platform architecture market, owing to the rapid implementation of Hadoop platform architectures by most enterprises.

In the Europe and APAC regions, the platform architecture market is growing steadily, owing to the adoption of cloud-based platforms by small and large enterprises.

The research report presents a comprehensive assessment of the market and contains thoughtful insights, facts, historical data, and statistically supported and industry-validated market data.

It also contains projections using a suitable set of assumptions and methodologies. The research report provides analysis and information according to market segments such as geographies, application, and industry.


What is the Competition Landscape in the Platform Architecture Market?

Some of the leading companies operating in the global platform architecture market include SAP SE, Cisco, Google, Synopsys Inc., Microsoft Corp., Oracle, IBM Corp., Apprenda Inc., RNF technologies.

This is increasing competition in the platform architecture market as new entrants backed by venture capital firms enter the market with innovative designs and advanced solutions. Major players are focusing on strategic acquisitions to acquire significant platform architecture market shares.

Recent Developments in the Platform Architecture Market:

  • By partnering with Bergmann, a design and architecture service provider, Colliers, a professional services and investment management company, aimed to grow its geographic footprint. Architects frequently face obstacles in communicating their ideas and delivering complex designs.
  • In January 2021, Gensler announced its partnership with Forge Development Partners, as part of which Gensler became Forge Development Partners’ chief architect and interior designer. This helped Gensler enhance its business by catering to Forge Development Partners.
  • IBI Group, a global architectural services provider, announced in December 2021 it purchased Teranis Consulting Ltd., an environmental consulting firm. The acquisition will help IBI Group enhance its sustainability and environmental management services.


Key Segments Profiled in the Platform Architecture Market

By Deployment Type:

By Services Type:

  • Software-as-a-Service (SaaS)
  • Platform-as-a-Service (PaaS)

By Verticals:

  • IT and Telecom
  • Government
  • BFSI
  • Retail
  • Manufacturing
  • Others

By Region:

  • North America
  • Latin America
  • Western Europe
  • Eastern Europe
  • Asia-Pacific
  • Japan
  • Middle East and Africa


About Us

Future Market Insights (an ESOMAR-certified market research organization and a member of the Greater New York Chamber of Commerce) provides in-depth insights into governing factors elevating the demand in the market. It discloses opportunities that will favor the market growth in various segments on the basis of Source, Application, Sales Channel and End Use over the next 10 years.


Future Market Insights Inc.
Christiana Corporate,
200 Continental Drive,
Suite 401, Newark,
Delaware – 19713, USA
T: +1-845-579-5705

Thu, 13 Oct 2022 20:13:00 -0500
Killexams : The rise of chief data officers and the increasing importance of data within enterprises - By Sameep Mehta

‘Data! Data! Data! I cannot make bricks without clay' is perhaps one of Holmes' most famous quotes, and it has become even more relevant in today’s digital economy: it illustrates the need for organizations to leverage data more effectively in order to drive innovation, optimize existing processes, and open new revenue channels. Enter the chief data officer (CDO), who is tasked with collating and decoding the multitudinous packets of information harvested by organizations hour after hour, then advising on and constructing evidence-based strategies. This evolution clearly indicates that data, fully utilized, can be exploited to drive profitable business.

In the last decade, data has become increasingly recognized as a key enabler of business strategy, resulting in a significant rise in the number of chief data officers. Earlier, data ownership was a grey area of overlap between an organization's IT department and operations department - IT was typically responsible for integration and data functions, and operations for ensuring that integrations ran smoothly. Now, companies have begun to introduce a new role that bridges the gap between IT and operations: the chief data officer. In most industries across the globe, business leaders have seen budgets, priorities, processes, and more upended by the pandemic. As data and data management become increasingly important for business enterprises to succeed in a digital-first world, the CDO role is also on the rise.

Data is the centerpiece of Digital Transformation

To make the most of data, leaders must first comprehend its transformative potential and advocate for it within their organizations. At the heart of this digital transformation is the need for a solid data foundation. Organizations are failing to become data-driven, according to a NewVantage Partners survey from 2022. Although business leaders believe that data is transforming the knowledge economy, companies are regressing when it comes to becoming data-driven.

In today's hyper-connected world, only a fraction of the data is ingested, processed, queried, and analyzed in real time because legacy technology structures are limited, modern architectural elements present challenges, and real-time processing jobs require a high level of computational power. With the rapid advancements in technology, the increasing value of data, and the surge in data literacy, being "data-driven" is changing its meaning. For enterprises to be truly data-driven, data strategies must be linked to clear business outcomes, while data officers must make a commitment to building a holistic and strategic data-driven culture. The more mature the data infrastructure and governance are, the more trusted and democratized data will be across the organization.

Data Centric Enterprises – An idea whose time has come and it is NOW

Data is a gold mine that can change the world, improve how we live, and develop a business, as well as make business operations easier, faster, and cheaper. It is more crucial than ever to be able to act on real-time insights based on data. We all witnessed how new data use cases emerged in the wake of the pandemic, and how digital transformation was accelerated. For example, in order to make critical decisions, such as tracing contacts and reengaging workers, companies and governments created COVID-19 health dashboards containing data from many sources. Businesses shifted to providing data-driven insights to restore strength and resilience to supply chains post-pandemic.

A well-planned data strategy offers opportunities for business transformation, cost reduction, improved engagement, and maximum flexibility. For instance, businesses can utilize enterprise data to develop advanced AI-based innovations using Natural Language Processing (NLP), machine learning, deep learning, neural networks, speech-to-text, and text-to-speech.

As we progress into India’s techade, enterprises must adopt a holistic data strategy, led by the chief data officer, but with equal participation from data leaders, AI scientists, and data stewards. It should focus on optimizing the right business metrics, establishing the right data architecture, and improving data literacy within the organization. Data, analytics, and artificial intelligence capabilities need to be massively invested in as the business landscape moves into its new reality.

Building a modern data architecture for the data driven enterprise

A digital-first world requires businesses to accelerate innovation and agility through data architecture that supports digital innovation. Businesses need new data management tools and AI technologies, built on a data fabric architecture, to improve decision making and innovate by getting value from a much broader corpus of data.

In the past, businesses invested heavily in relational databases to build data warehouses, which limited the use cases to traditional analytics. Despite the shift towards data lakes for data science purposes, data warehouses remained a cornerstone. With a data fabric architecture, businesses can continue to use the disparate data sources and storage repositories (databases, data lakes, data warehouses) that they've already invested in while simplifying data management.
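The core idea of a data fabric, a single access layer that answers a query across disparate stores without first consolidating them, can be sketched in a few lines. This is purely illustrative (not any vendor's product); the stores, table, and field names are assumptions, with SQLite standing in for a warehouse and a list of dicts standing in for a data lake:

```python
import sqlite3

# "Warehouse": structured sales data in a relational store
warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE sales (customer TEXT, amount REAL)")
warehouse.executemany("INSERT INTO sales VALUES (?, ?)",
                      [("acme", 120.0), ("globex", 75.5)])

# "Lake": semi-structured records, as they might land from a ticketing system
lake = [
    {"customer": "acme", "ticket": "late shipment"},
    {"customer": "globex", "ticket": "billing question"},
]

def customer_view(name):
    """One access layer resolves a customer across both stores in place."""
    total = warehouse.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM sales WHERE customer = ?",
        (name,)).fetchone()[0]
    tickets = [r["ticket"] for r in lake if r["customer"] == name]
    return {"customer": name, "total_sales": total, "tickets": tickets}

print(customer_view("acme"))
```

The point of the sketch is the design choice the article describes: neither store is migrated or duplicated; the fabric layer queries each where it already lives.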

Moving forward as a data-driven digital organization

Businesses today are not harnessing the true value of data as it is isolated in silos – from data centers to public and private clouds. It is now more important than ever for businesses to realize the urgency of becoming more agile and data-driven, and how it can make or break a business. As organizations adapt to the new world, they must invest in tools that will allow them to take advantage of their most valuable asset - data.
The author is IBM Distinguished Engineer and Lead – Data and AI Platforms, IBM Research India.

Disclaimer: The views expressed are solely those of the author, and the publisher does not necessarily subscribe to them. The publisher shall not be responsible for any damage caused to any person or organization, directly or indirectly.

Mon, 03 Oct 2022 15:03:00 -0500
Killexams : Axiado to Demo New Smart Secure Control Module Technology at OCP Global Summit

Executive leadership from Axiado will be available at the OpenPOWER Foundation (OPF) booth #C32 at the OCP Global Summit in San Jose, Calif., October 18-20, to demo their Smart SCM security hardware ... Mon, 17 Oct 2022 02:36:00 -0500

Killexams : Is OTP a Viable Alternative to NIST's Post-Quantum Algorithms?

The quantum threat to RSA-based encryption is deemed to be so pressing that NIST is seeking a quantum safe alternative

The cracking of the SIKE encryption algorithm (deemed to be on its way to NIST standardization) on a single classical PC should make us evaluate our preconceptions on what is necessary for the post-quantum era. SecurityWeek has spoken to several cryptography experts to discuss the implications of the SIKE crack.

The issue

NIST, through the vehicle of a competition, is in the process of developing new cryptographic algorithms for the post-quantum era. Shor’s algorithm has already shown that existing RSA encryption, which underlies modern internet communication, will be breakable once sufficiently large quantum computers exist, probably within the next decade.

IBM currently has quantum processors with 127 qubits. Mike Osborne, CTO of IBM Quantum Safe, added, “and a roadmap essentially, more or less, up to 4,000 qubits [with] an idea how we get to a million qubits… the era of what we call cryptographically relevant quantum machines is getting closer all the time.”

The threat to RSA-based communication has become known as the ‘harvest now, decrypt later’ problem. Adversarial nations can steal and copy currently encrypted data now, knowing that in a relatively few years’ time, they will be able to decrypt it.

Many secrets have a lifetime of decades – at the personal level, for example, social security numbers and family secrets; while at the nation level this can include state secrets, international policies, and the truth behind covert activity. The quantum threat to RSA is deemed to be so pressing that NIST is seeking a quantum safe alternative.

But the SIKE crack should remind us that the threat to encryption already exists – encryption, even post quantum encryption – can be defeated by classical computing.

Some cryptographic theory

The new algorithms being considered by NIST are designed to be ‘quantum safe’. This is not the same as ‘quantum secure’. ‘Safe’ means there is no known way to decrypt the algorithm. ‘Secure’ means that it can be mathematically or otherwise proven that the algorithm cannot be decrypted. Existing algorithms, and those in the current NIST competition, are thought to be quantum safe, not quantum secure.

As the SIKE crack shows us, any quantum safe encryption will be safe only until it is cracked.

There is only one quantum secure possibility – a one-time pad (OTP). A one-time pad is an encryption method that cannot be cracked. It requires a single-use (one-time) pre-shared key that is not smaller than the message being sent. The result is information-theoretically secure – that is, it provides perfect secrecy that is provably secure against mathematical decryption, whether by classical or quantum computers.
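The mechanics of a one-time pad are simple: XOR each byte of the message with the corresponding byte of the pad, and apply the identical operation to decrypt. A minimal sketch in Python, noting that `secrets` is a CSPRNG standing in for the true (e.g. quantum) entropy source a real OTP requires:

```python
import secrets

def otp_xor(data: bytes, pad: bytes) -> bytes:
    """One-time pad: XOR with a never-reused pad at least as long as the data.

    Encryption and decryption are the same operation.
    """
    if len(pad) < len(data):
        raise ValueError("pad must be at least as long as the message")
    return bytes(d ^ p for d, p in zip(data, pad))

# NOTE: secrets.token_bytes() is pseudo-random; a genuine OTP needs
# true randomness, which is exactly the difficulty discussed below.
pad = secrets.token_bytes(32)
ciphertext = otp_xor(b"attack at dawn", pad)
assert otp_xor(ciphertext, pad) == b"attack at dawn"
```

The sketch also makes the cost visible: the pad is consumed byte-for-byte with the message and must never be reused, which is precisely why key generation and distribution dominate the discussion that follows.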

But there are difficulties – generating keys of that length with true randomness and delivering the key to the destination have so far proven impractical by electronic means. 

Scott Bledsoe, CEO at Theon Technology, summarized the current status: “The only encryption method guaranteeing survivorship even at the creation of the quantum computer is one-time pad encryption.” But he told SecurityWeek there is an issue with randomness and the uniformity of the distribution in the keys – any issue at this top level can allow you to predict all future keys.  

“Secondly,” he added, “the size of the key needs to be equal or larger than the message, and this requires more compute time and is slower than other classical algorithms.” The third problem is, “Key distribution and how the initial keys can be transmitted. This was handled in the past by person-to-person exchange, guaranteeing secrecy.”

This is the nub of the issue. NIST’s algorithms can only be ‘safe’. OTPs can be ‘secure’ but have been impractical to use. But the need for ‘secure’ rather than ‘safe’ is highlighted by the SIKE crack. Any algorithm can be considered safe until it is cracked, or until new methods of decryption suggest it is unsafe. During the time it is used before it is unsafe, it remains susceptible to harvest now, decrypt later.

This can happen at any time to any mathematical algorithm. The original RSA had a key length of 128 bits with a projected lifetime of millions of years before it could be cracked. As computers got better, the lifetime was progressively reduced requiring the key length to be increased. RSA now requires a key length in excess of 2,000 bits to be considered safe against classical computers, but cannot be secure against Shor’s quantum algorithm.

So, since no mathematical encryption can be proven secure, any communication using an algorithm can be decrypted if that algorithm can be broken – and SIKE demonstrates that it doesn’t always require quantum power to do so. At the very best, then, NIST’s quantum safe algorithms provide no guarantee of long-lasting security.

“There are multiple research organizations and companies working on these problems,” says Bledsoe. “In the future we will see algorithms based on OTP concepts that have answers to the current shortcomings. They will leverage information theory and become viable options as an alternative to NIST-approved algorithms.”

The pros and cons of OTP

The NIST competition is solely focused on developing new encryption algorithms that should, theoretically, survive quantum decryption. In other words, it is an incremental advance on the current status quo. This will produce quantum safe encryption. But quantum safe is not the same as quantum secure; that is, encrypted communications will only remain encrypted until the encryption is broken.

History and mathematical theory suggest this will inevitably, eventually, happen. When that does happen, we will be back to the same situation as today, and all data harvested during the use of the broken algorithm will be decrypted by the adversary. Since there is an alternative approach – the one-time pad – that is secure against quantum decryption, we should consider why this approach isn’t also being pursued.

SecurityWeek spoke to senior advocates on both sides: NIST’s computer security mathematician Dustin Moody, and Qrypt’s cofounder and CTO Denis Mandich.

Moody accepts that one-time pads provide theoretically perfect security, but suggests their use has several drawbacks that make them impractical. “The one-time pad,” he said, “must be generated by a source of true randomness, and not a pseudo-random process.  This is not as trivial as it sounds at first glance.”

Mandich agrees with this, but comments, “[This is] why Qrypt uses quantum random number generators (QRNGs) licensed from the Oak Ridge National Laboratory and the Los Alamos National Laboratory.” These are quantum entropy sources that are the only known source of genuine randomness in science. (See Mitigating Threats to Encryption From Quantum and Bad Random for more information on QRNGs.)

Moody also suggests that OTP size is a problem. “The one-time pad must be as long as the message which is to be encrypted,” he said. “If you wish to encrypt a long message, the size of the one-time pad will be much larger than key sizes of the algorithms we [NIST) selected.”

Again, Mandich agrees, saying the trade-off for higher security is longer keys. “This is true for 100% of all crypto systems,” he says: “the smaller the keys, the less security is a general statement.” But he adds, “One of the other [NIST] finalists is ‘Classic McEliece’ which also has enormous key sizes but will likely be standardized. In many common use cases, like messaging and small files, McEliece keys will be much larger than OTPs.”

Moody’s next concern is authentication. “There is no way to provide authentication using one-time pads,” he said.

Here, Mandich simply disagrees. “Authentication can be provided for any type of data or endpoint.” He thinks the idea may stem from the NSA’s objection to QKD. The NSA has said, “QKD does not provide a means to authenticate the QKD transmission source.”

But Mandich adds, “A simple counter example is that the OTP of an arbitrary length may be hashed and sent in the clear between parties to authenticate that they have the same OTP. This could be appended to the encrypted data.”
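Mandich's counterexample can be sketched directly. This is an illustration of the idea, not Qrypt's actual protocol: each party publishes a hash of its pad, and matching digests indicate both hold the same pad without revealing the pad itself (assuming the pad has enough entropy that the hash cannot be brute-forced):

```python
import hashlib
import hmac

def pad_fingerprint(pad: bytes) -> str:
    """SHA-256 digest of a shared one-time pad (illustrative)."""
    return hashlib.sha256(pad).hexdigest()

# Simulate both parties holding the same pre-shared pad
alice_pad = bytes(range(32))
bob_pad = bytes(range(32))

# Each side can send its fingerprint in the clear; a constant-time
# comparison confirms they hold identical pads.
assert hmac.compare_digest(pad_fingerprint(alice_pad),
                           pad_fingerprint(bob_pad))
```

As Mandich suggests, such a digest could be appended to the encrypted data as a lightweight authentication tag for the pad.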

“As the name implies,” said Moody, “one-time pads can only be used once. This makes them very impractical.”

But Mandich responds, “This is the trade-off to achieve higher security. Re-use of encryption keys means that breaking or getting access to the key facilitates decryption of all the previously encrypted data. OTPs are only used once, so if someone gets access to one OTP, it does not help in any other decryption.”

For Moody, the biggest problem for OTPs is the exchange of ‘keys’. “Probably the most major drawback,” he told SecurityWeek, “is that to use a one-time pad with another party, you must have securely exchanged the secret one time pad itself with the other party.”

He believes this distribution at scale is impossible and doesn’t work where the requirement is to communicate with another party that hasn’t been communicated with before. “You could send the one-time pad through the mail or via a courier, but not electronically,” he continued. “And if you could securely send the one-time pad, why didn’t you just send the message you wanted to share with the other party? Which makes the one-time pad not needed.” 

Mandich points out that the difficulty in key transfer and distribution at scale apply equally to all the public key encryption keys currently being considered by NIST. “There is nothing unique about OTPs other than size,” he said. “OTPs can be generated continuously and consumed when the messages are created at a later date. There is no reason to do it simultaneously unless it is a realtime communications channel.” He adds that combining keys for decryption with the encrypted data makes it easy to attack. “Decoupling these two mechanisms [as with OTPs] makes it almost impossible.”

Finally, comments Moody, “Modern cryptosystems overcome these obstacles and are very efficient.”

Mandich concedes this point but refers to the distinction between NIST’s quantum safe approach, and the OTP’s ability to be quantum secure. “Modern systems are very efficient and a one-size-fits-all solution – but at the cost of less security. Obstacles to using OTPs have long been overcome by the cloud, high bandwidth networks, and distributed and decentralized data centers. The PQC evolution from RSA is just changing an algorithm based on a 1970s pre-internet architecture, when Alice and Bob were connected by a single copper wire channel and a few network switches.”

Current examples

Some companies are already using OTP concepts in their technology. Two examples include startups Rixon and Qrypt. The first borrows OTP ideas to secure data, while the second can enable genuine OTP communication.


Rixon delivers a cloud-based vaultless tokenization system. Information received from a customer is immediately sent to the cloud and tokenized. What is returned to the client is data where each character has been randomly tokenized, and detokenization is under the control of the client’s customer; that is, the original end user.

No encryption algorithm nor encryption key is directly used in the tokenization, just a large set of random steps. The purpose is not to provide secure communications nor to provide a one-time pad. The purpose is to remove clear text data from a customer’s computers so that it cannot be stolen.

Nevertheless, the process borrows many of the concepts of the OTP. There is no algorithm that can be decrypted to provide widescale adversarial access to the data. Each character is independently tokenized, so that even if the tokenization process for that character is broken or discovered, it will only provide access to the single character.

The effect is that no two sets of customer data have the same ‘cryptographic’ process, making it similar to the OTP approach. 
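A toy version of per-character tokenization, an illustration of the concept rather than Rixon's actual scheme, gives each character its own independently generated random substitution table, so recovering one table exposes only that single character:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def tokenize(value: str):
    """Tokenize each character with its own random substitution table."""
    rng = secrets.SystemRandom()
    tables, token_chars = [], []
    for ch in value:
        shuffled = list(ALPHABET)
        rng.shuffle(shuffled)            # fresh random table per position
        table = dict(zip(ALPHABET, shuffled))
        tables.append(table)
        token_chars.append(table.get(ch, ch))
    return "".join(token_chars), tables

def detokenize(token: str, tables) -> str:
    """Invert each position's table; only the table holder can do this."""
    out = []
    for ch, table in zip(token, tables):
        inverse = {v: k for k, v in table.items()}
        out.append(inverse.get(ch, ch))
    return "".join(out)

token, tables = tokenize("Acct42")
assert detokenize(token, tables) == "Acct42"
```

In a vaultless design, the tables themselves would be derived and held outside the customer's systems, so the customer's data store contains only tokens, never clear text.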

“Everyone starts with a robust key management system, with key rotation, and key retirement being a keystone of every encryption management model,” Dave Johnson, CEO and cofounder of Rixon, told SecurityWeek. “After a time, all systems become looser in the sense that the processes and procedures become lax. Paperwork is easily adjusted to reflect compliance, but the reality is that key management systems become outdated and useless. Keys are stolen, compromised, and become known – organizations end up over time with an illusion of security.”

This will get worse in the quantum era. He continued, “With the advent of quantum processors – not that they’re really necessary to compromise encryption – with the implementation of these extremely fast processors, the faults and the frailties of encryption will become blatantly apparent.”


Qrypt generates genuinely random numbers through a quantum process. This is the only known approach able to produce true randomness. The company has also developed a method able to provide the same random numbers simultaneously with both the sender and receiver. Both ends of the communication channel can use these numbers to generate the encryption keys without requiring the keys to be sent across the untrusted internet.

The initial purpose was primarily to provide true random numbers for any key generation, since poor or bad random numbers are the primary encryption attack vector. The second purpose was to eliminate the need to send keys across an untrusted network by having the same key independently built at both ends of the communications channel.

This process can be used to improve the security of both current classical algorithms and NIST’s PQC algorithms, or to facilitate a move toward the security of one-time pads – the same process can be harnessed as a one-time pad.
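The second idea can be sketched in miniature, assuming both parties already hold an identical pool of true random bytes (how Qrypt actually distributes that shared randomness is its own proprietary machinery, and the names below are illustrative):

```python
import hashlib
import hmac

# Stand-in for a pool of quantum-generated random bytes that both
# parties already hold; nothing secret below ever crosses the network --
# only the (public) pool offset and context label would be exchanged.
shared_pool = bytes(range(256)) * 16

def derive_key(pool: bytes, offset: int, context: bytes) -> bytes:
    # Each side slices the same region of the pool and runs the same
    # derivation, so both obtain an identical 32-byte key without the
    # key itself ever being sent across the untrusted internet.
    slice_ = pool[offset:offset + 64]
    return hmac.new(slice_, context, hashlib.sha256).digest()

alice_key = derive_key(shared_pool, offset=128, context=b"session-42")
bob_key = derive_key(shared_pool, offset=128, context=b"session-42")
assert alice_key == bob_key
```

The design choice here is the point Qrypt emphasizes: removing key transport from the threat model entirely, rather than making transported keys harder to break.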

The future for encryption

There is no doubt that current encryption algorithms need to be replaced before the quantum era. NIST is focused on staying with the existing approach – by using more complex algorithms to counter more powerful computers. If one-time pads were still impractical (NIST believes that to be true), then this is the only valid way forward.

But startups are already demonstrating that the problems that prevented electronic OTPs in the past are being circumvented by new cloud technology. This throws into stark relief that there is now a genuine choice between NIST’s quantum-safe solutions and the OTP’s quantum-secure solution.

Related: Senators Introduce Bipartisan Quantum Computing Cybersecurity Bill

Related: NIST Announces Post Quantum Encryption Competition Winners

Related: CISA Urges Critical Infrastructure to Prepare for Post-Quantum Cryptography

Related: QuSecure Launches Quantum-Resilient Encryption Platform

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security; and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.
Tue, 04 Oct 2022 04:15:00 -0500
Quantinuum Is On A Roll – 17 Significant Quantum Computing Achievements In 12 Months

On the heels of two major quantum computing achievements last month, Tony Uttley, President and COO of Quantinuum, made three more announcements at IEEE Quantum Week 2022. The company’s latest announcements include another quantum volume record, a new method to make two-qubit gates with higher fidelity and greater efficiency, and a milestone achievement of more than a half-million downloads of Quantinuum’s open-source software development kit (SDK) called TKET.

Before analyzing the latest announcements, it's important to review Quantinuum’s quantum hardware and architecture that made those announcements possible.

Quantum Charge-Coupled Device

There are two common ion-trap architectures: the linear trap and the quantum charge-coupled device (QCCD). The linear trap is exactly what it sounds like – a long chain of ion qubits contained in a single trapping zone. Linear traps have shortcomings, such as a limited ability to scale to large numbers of qubits and limited qubit addressability. QCCD has its limitations as well, but nearly all of the major problems have been solved.

In July, Quantinuum solved a technical QCCD problem that had baffled the quantum research community for years, when its scientists developed a method that allows ions to make 90-degree turns when moving through ion trap intersections. This is covered in more detail later.

QCCD was first proposed in a research paper by Dr. David Wineland and his NIST group over twenty years ago. However, Quantinuum was the first company to implement and improve it. Dr. Chris Monroe, co-founder and Chief Scientist of IonQ and Professor of Physics and ECE at Duke University, was one of the authors of that paper.

Rather than storing qubits and performing qubit operations in a single linear trapping zone, QCCD uses multiple zones, allowing the arbitrary rearrangement of qubits and the accommodation of various codes, including those with exotic geometries. Small chains of ions in multiple small zones provide greater precision and control compared to large ion chains within a single trapping zone.

Of the two architectures, QCCD is considered the more advanced and more flexible. Quantinuum’s H-Series quantum computer currently has 20 qubits spread across five gating zones where qubits are parked and quantum operations are performed. Ions can be moved from one zone to the next and then recombined. The architecture provides high-fidelity interactions between distant qubits and low crosstalk between gates. All of Quantinuum’s recent advancements have been made possible by QCCD’s high fidelity and flexibility.

Tony Uttley is confident that QCCD will support Quantinuum’s quantum roadmap of future generations of H-Series processors. He feels that QCCD offers a superior menu of technical advantages and provides greater adaptability not only for technical reasons, but for future market needs as well.

While QCCD qubit control is precise, qubit control in a single linear trapping zone with 50 or more qubits can be problematic. Packing many ions together in a single trap can adversely affect the spacing between ions, which makes it difficult to address individual ions and creates unwanted interactions between them. Fiber optics and optical switches may allow the interconnection of multiple linear traps in the future, allowing for greater control and scaling. However, no optical switch available today is fast enough to stitch a multi-chip trapped-ion architecture together.

Dr. Jungsang Kim is Co-founder and Chief Technology Officer of IonQ. He is also Professor of Physics in the Department of Electrical and Computer Engineering at Duke University. While working for Bell Labs in the early 2000s, Dr. Kim built the world's largest optical switch with over a thousand ports. Dr. Kim is currently working on an optical switch for IonQ's future architecture.

Quantinuum announcements at IEEE Quantum Week

  • Arbitrary angle entangling gate capabilities: Single-qubit gates and fully entangling two-qubit gates are routinely used to build quantum circuit operations. Because many algorithms don’t need fully entangling two-qubit gates, Quantinuum developed a method using arbitrary angle, partially entangling gates that increases efficiency and reduces errors. Lower errors allow more complex problems to be run.

Dr. Brian Neyenhuis is the Director of Commercial Operations at Quantinuum. When asked if the method was proprietary, Dr. Neyenhuis explained that the technique was not proprietary to Quantinuum.

“Other companies may be able to implement arbitrary angle entangling gates at some point in the future,” he said. “However, we have an advantage with QCCD because when we do a two-qubit gate, it’s only those two specific qubits in the interaction zone with laser beams. That makes it very straightforward. If there are too many qubits, such as in a linear trap, you have to worry about crosstalk, which occurs when many qubits interact together; and you also have to be careful about what the other qubits are doing. You can do those things with longer chains, but it’s a lot harder.”

Dr. Neyenhuis also pointed out that there are algorithms where the arbitrary angle two-qubit gate acts as a natural building block. In general, the arbitrary angle gate can run on many quantum circuit types. He gave the quantum Fourier Transform as an example.

The Fourier Transform has been called one of the most useful mathematical tools in modern science and engineering. Dr. Neyenhuis explained that the use of arbitrary angle two-qubit gates in the quantum Fourier Transform can reduce the number of two-qubit gates needed for the transform by 2x, and it can reduce overall errors by 2x as well. Greater circuit fidelity offers the advantage of running deeper and more complex circuits. More information about arbitrary angle entangling gates can be found here.
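The QFT point can be made concrete. The transform is built from Hadamards plus controlled-phase gates CP(π/2^k); on hardware offering only a fixed, fully entangling two-qubit gate, each partial-angle CP must be decomposed into several native gates, whereas an arbitrary-angle gate applies it in one step. A small pure-Python check that partial-angle phase gates compose additively (illustrative, not Quantinuum’s native gate set):

```python
import cmath
import math

def cphase(theta):
    """4x4 controlled-phase gate: applies e^{i*theta} only to |11>."""
    m = [[0j] * 4 for _ in range(4)]
    for i in range(3):
        m[i][i] = 1 + 0j
    m[3][3] = cmath.exp(1j * theta)
    return m

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def close(a, b, eps=1e-12):
    return all(abs(a[i][j] - b[i][j]) < eps for i in range(4) for j in range(4))

# Two half-angle gates equal one full-angle gate: a native arbitrary-angle
# gate does in a single operation what a fixed gate set must decompose.
theta = math.pi / 8  # a typical CP(pi/2^k) angle appearing in the QFT
assert close(matmul(cphase(theta / 2), cphase(theta / 2)), cphase(theta))
```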

  • Record quantum volume 8192 attained by using arbitrary angle entangling gates: Setting Quantum Volume (QV) records is nothing new for Quantinuum (or previously, as Honeywell Quantum Solutions). However, there is something new about this record – Quantinuum used its new arbitrary angle, partially entangling gates to help achieve its latest record quantum volume. The new QV record of 8192 (2^13) is double Quantinuum’s previous record of 4096, set only five months ago. In fact, it is the seventh time in two years that Quantinuum’s H-Series system has set the record for a measured QV. Quantinuum’s goal is to increase quantum volume by 10x annually.

IBM originally developed quantum volume in 2017 as a hardware-agnostic performance measurement for gate-based quantum computers such as the Quantinuum H-Series system. QV testing measures many aspects of a quantum computer. Although the number of qubits is important, there are also other system factors that affect a quantum computer’s performance such as qubit connectivity, gate fidelity, cross talk, circuit compiler efficiency, and more. A high quantum volume is an indicator of a quantum computer’s power.

QV score is determined by running specified algorithms and arbitrary circuits. For this QV record, Quantinuum ran 220 quantum volume circuits 90 times each.
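For context, the QV protocol runs random “square” circuits (width equal to depth, n qubits); a width passes if the fraction of measured “heavy” outputs – bitstrings whose ideal probability exceeds the median – is above two-thirds with high statistical confidence, and the reported figure is 2^n for the largest passing n. So 8192 corresponds to passing at n = 13. A schematic of the scoring step (deliberately simplified; it ignores the confidence-interval details of the real protocol):

```python
import math

def heavy_output_probability(measured_counts, heavy_set):
    """Fraction of shots that landed in the circuit's heavy-output set."""
    total = sum(measured_counts.values())
    heavy = sum(c for bits, c in measured_counts.items() if bits in heavy_set)
    return heavy / total

def quantum_volume(largest_passing_width: int) -> int:
    """QV is 2^n for the largest width n whose circuits pass the test."""
    return 2 ** largest_passing_width

# Toy example: 85 of 100 shots fall in the heavy set, so this width passes.
counts = {"010": 40, "110": 35, "001": 15, "111": 10}
heavy = {"010", "110", "111"}
assert heavy_output_probability(counts, heavy) > 2 / 3
assert quantum_volume(13) == 8192
assert math.log2(8192) == 13
```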

Quantinuum scientists found that arbitrary angle two-qubit gates performed more efficiently and with fewer errors at each step of the algorithm. The cumulative effect of the arbitrary angle gates helped boost quantum volume to its current record value of 8192.

Quantinuum plans to continue the use of quantum volume until a better metric is created and endorsed by the ecosystem. More complete information about the new quantum volume record can be found here.

  • A big number for TKET downloads

Tony Uttley also announced that Quantinuum achieved a milestone of surpassing 500,000 downloads of TKET.

TKET is Quantinuum’s open source SDK used by developers writing quantum algorithms for gate-based quantum computers. It is universally accessible through the PyTKET Python package. Functionally, it optimizes quantum algorithms by reducing computational resources. The TKET SDK also integrates with Qiskit, Cirq and Q#.

Since the software is downloaded both by companies and academic institutions with multiple users, the user count is likely greater than 500,000. Quantinuum estimates that the TKET base is growing globally and now has close to a million users.

Quantinuum plans to continually evolve the TKET platform as updates and advancements occur to ensure it includes new hardware capabilities such as Quantinuum’s most recent development, arbitrary angle two-qubit gates.

Analyst comments

Over the past 12 months, Quantinuum scientists have performed a great deal of research. However, there are two previous pieces of standout research that should be highlighted:

  1. Highlighted research #1 – Shuttling ion pairs through intersections and negotiating 90-degree turns: In July, Quantinuum researchers discovered how to move two ions of different species – ytterbium and barium – simultaneously through an intersection of a microfabricated prototype trap with a grid-like structure. The research demonstrated that an ion pair could turn 90-degree corners with speed but without excessive motion.

It may not sound significant, but this research provides the capability to execute Quantinuum’s long term roadmap for future generations of H-series quantum computers. Quantinuum is following the pre-merger hardware strategy originally developed by Honeywell Quantum Solutions. That plan calls for the System Model H-2 to use a racetrack-like design shown in the above graphic. H-Series System Model H-3, H-4, and H-5 will use two-dimensional traps that resemble a city street grid with multiple railroad lines and intersections.

Limitations inherent in the QCCD grid design are what motivated Quantinuum scientists to pursue research needed to move ion pairs through grid intersections together and make sharp corner turns without excessive energy and motion.

Ion trap researchers have been working on this problem for years. Prior to Quantinuum’s research, it was believed that the only way for paired ions to move through zones was to first separate the pair, then move them through junctions one at a time. That solution would have significantly increased processing time.

2. Highlighted research #2 - Closing the gap on fault-tolerant quantum error correction: Fault tolerant quantum error correction will make it possible to build quantum computers with enough qubits to solve problems far beyond the computational reach of today’s largest and most powerful supercomputers.

Qubits are very sensitive to sources of noise in their environment which can result in random errors during quantum computation. Uncorrected errors can accumulate to the point that viable computation isn’t possible. It is currently not possible to build quantum computers with millions of qubits due to the lack of quantum error correction (QEC).

QEC is both a physics and engineering problem. In the quantum ecosystem, nearly every academic and commercial institution is performing some level of QEC research. The entire ecosystem has invested years of research into QEC and yet a complete fault tolerant QEC solution has yet to be developed, which illustrates its complexity and difficulty. Even so, much progress has been made.

A good example is the research paper Quantinuum published in August that illustrates two important error correction firsts.

These “firsts” were made possible by using physical qubits to form logical qubits. Each logical qubit is formed from groups of entangled physical qubits that perform computations while other qubits are tasked with error detection and correction.

Several years ago, it was thought that 1000 physical qubits would be needed for each logical qubit. Now that ratio is down to 10 to 1 or less.
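The physical-to-logical overhead is easiest to see with the simplest classical analogue, a repetition code, in which several noisy physical bits encode one logical bit and a majority vote corrects isolated errors. This is a deliberately crude stand-in for the real quantum stabilizer codes, but it shows why the physical-to-logical ratio matters:

```python
def encode(logical_bit: int, n_physical: int = 5):
    """One logical bit spread redundantly across n physical bits."""
    return [logical_bit] * n_physical

def decode(physical_bits):
    """Majority vote corrects up to (n - 1) // 2 flipped bits."""
    return int(sum(physical_bits) > len(physical_bits) / 2)

code = encode(1)
code[0] ^= 1  # a physical error
code[3] ^= 1  # a second physical error
assert decode(code) == 1  # the logical bit survives both errors
```

Driving that redundancy ratio from roughly 1000-to-1 down to 10-to-1 – while making the logical operations *more* reliable than the physical ones, as Quantinuum reports – is what makes large fault-tolerant machines plausible.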

For the first time, Quantinuum researchers were able to construct a logical entangling circuit that had a higher fidelity than its physical counterpart. The researchers also accomplished another QEC first by entangling two logical qubits in a fully fault-tolerant manner using real-time QEC.

Key to this demonstration is its repeatability, a necessity for any QEC solution. While the research does not provide a complete QEC solution, it is still an important proof of concept that creates a new starting point for other researchers to build on.

More information on Quantinuum error correction research can be found here.

3. Quantinuum research performed over the past 12 months:

September 27, 2022

Quantinuum Sets New Record with Highest Ever Quantum Volume of 8192

August 4, 2022

Logical qubits start outperforming physical qubits

July 11, 2022

Quantum Milestone: Turning a Corner with Trapped Ions

June 14, 2022

Quantinuum Completes Hardware Upgrade; Achieves 20 Fully Connected Qubits

May 24, 2022

Quantinuum Introduces InQuantoTM to Explore Industrially Relevant Chemistry Problems on Today’s Quantum Computers

April 14, 2022

Quantinuum Announces Record Quantum Volume of 4096

March 29, 2022

On the ArXiv: Modeling Carbon Capture with Quantum Computing

March 29, 2022

Quantinuum Announces Updates to Quantum Natural Language Processing Toolkit λambeq, Enhancing Accessibility

March 3, 2022

Quantinuum announces a world record in fidelity for quantum computing qubits

December 29, 2021

Demonstrating Benefits of Quantum Upgradable Design Strategy: System Model H1-2 First to Prove 2,048 Quantum Volume

December 7, 2021

Introducing Quantum Origin, The world’s first quantum-enhanced cryptographic key generation platform to protect data from cybersecurity threats

(Note: On November 30, 2021, Quantinuum was formed by the merger between Honeywell Quantum Solutions and Cambridge Quantum.)

November 27, 2021

Quantum Milestone: We Can Now Detect and Correct Quantum Errors in Real Time

November 29, 2021

LAMBEQ: A Toolkit for Quantum Natural Language Processing

November 29, 2021

How a New Quantum Algorithm Could Help Solve Real-world Problems Sooner

November 29, 2021

Quantum Milestone: 16-Fold Increase in Performance in a Year

October 15, 2021

Researchers ‘Hide’ Ions to Reduce Quantum Errors By Reducing Crosstalk Errors An Order of Magnitude

October 20, 2021

TKET: Quantum Software Tool Goes Open Source

Note: Moor Insights & Strategy writers and editors may have contributed to this article.

Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex,, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Infinidat, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, MulteFire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung Electronics, Samsung 
Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.


Thu, 06 Oct 2022 04:47:00 -0500 Paul Smith-Goodson
Red Hat Names Carolyn Nash as Senior Vice President and Chief Operating Officer

Open source leader also selects new senior vice president and chief financial officer and vice president and chief information officer.

Red Hat, Inc., the world's leading provider of open source solutions, today announced that Carolyn Nash has been named the company's senior vice president and chief operating officer, effective immediately. As part of this move, Red Hat is building out the Finance and Operations organization and has named Robert Leibrock senior vice president and chief financial officer and Jim Palermo as vice president and chief information officer. Nash will continue reporting to Red Hat's president and chief executive officer, Matt Hicks. Leibrock and Palermo will report directly to Nash.


Carolyn Nash, senior vice president and chief operating officer, Red Hat (Photo: Business Wire)

Nash most recently served as Red Hat's senior vice president and chief financial officer and was responsible for leading the company's global finance organization. Before assuming the CFO role in early 2022, Nash was vice president of Finance, overseeing the Global Finance Transformation and Operations (GTO) organization. She has played an integral part in strengthening and growing the company's finance operation. Before Red Hat, she served in leadership positions at Cisco, Hewlett Packard and KPMG in finance and operational roles.

Leibrock brings 20 years of experience in both the financial and operational space to Red Hat. He has spent much of his career at IBM, most recently serving as assistant controller, and was responsible for enterprise-wide financial management, including forecasts, measurements and IBM's operational management system. He also played a key role in IBM's $34 billion acquisition of Red Hat in 2019, responsible for the overall project office, finance and operations functions and driving offering synergies.

Palermo has nearly 30 years of experience in information technology spanning technical and leadership roles. He joined Red Hat in 2010 and most recently served as vice president of Digital Solutions Delivery (DSD), where he was responsible for developing and driving the environment, tools and delivery for hosting internal workloads both in the hybrid cloud and in Red Hat's next-generation data centers.

Supporting Quotes
Matt Hicks, president and chief executive officer, Red Hat
"As Red Hat evolves to meet our customers wherever and however they operate across the open hybrid cloud, we need every aspect of our business, from engineering and product development to corporate functions like IT and finance, to perform at the highest possible level. Carolyn's proven track record shows that she is the right leader to oversee the expanded Finance and Operations organization, backed by the expertise and experience of Bobby and Jim. Together, I'm confident that these leaders can help accelerate Red Hat's mission to help our customers take advantage of open source innovation while helping us more readily adapt to dynamic market conditions."

Carolyn Nash, senior vice president and chief operating officer, Red Hat
"I am so grateful to be a part of Red Hat and have the ability to work alongside our incredibly talented and passionate associates every day. The Finance and Operations functions are the engine that fuels our growth and make it possible for all Red Hatters to be successful in their jobs. As Red Hat works toward its mission of being the defining company of the hybrid cloud era, our corporate functions need to be in lockstep. I am excited to work with Bobby, Jim and the rest of the organization as we enable customer success and support Red Hat into the future."

Robert Leibrock, senior vice president and chief financial officer, Red Hat
"I have been fortunate to work very closely with Red Hat leadership over the last few years so joining the team feels like a natural progression. Red Hat is the driver in the hybrid cloud industry and there is immense opportunity ahead. As CFO, I'm excited to make an impact on Red Hat's next chapter of success and honored to lead a highly talented group of associates. As more organizations look to open source and Red Hat to help them innovate, we are the right strategic partner to help them modernize their IT infrastructure and applications."

Jim Palermo, vice president and chief information officer, Red Hat
"Customer requirements must be the driver behind IT's future, especially as organizations transition to complex, multi-layered environments that use open hybrid cloud and automation technologies. IT's role within the Finance and Operations teams is not only to support broad business operations but also to serve as a reference architecture for Red Hat solutions in production. I am proud to take on this leadership role and help drive alignment within Red Hat's internal systems and environments as we work together to make hybrid cloud the default language for global IT."


About Red Hat, Inc.
Red Hat is the world's leading provider of enterprise open source software solutions, using a community-powered approach to deliver reliable and high-performing Linux, hybrid cloud, container, and Kubernetes technologies. Red Hat helps customers integrate new and existing IT applications, develop cloud-native applications, standardize on our industry-leading operating system, and automate, secure, and manage complex environments. Award-winning support, training, and consulting services make Red Hat a trusted adviser to the Fortune 500. As a strategic partner to cloud providers, system integrators, application vendors, customers, and open source communities, Red Hat can help organizations prepare for the digital future.

Forward-Looking Statements
Except for the historical information and discussions contained herein, statements contained in this press release may constitute forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are based on the company's current assumptions regarding future business and financial performance. These statements involve a number of risks, uncertainties and other factors that could cause actual results to differ materially. Any forward-looking statement in this press release speaks only as of the date on which it is made. Except as required by law, the company assumes no obligation to update or revise any forward-looking statements.

Red Hat and the Red Hat logo are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the U.S. and other countries. Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries.

© 2022 Benzinga does not provide investment advice. All rights reserved.

Wed, 12 Oct 2022 01:01:00 -0500
Encoding Images Against Use in Deepfake and Image Synthesis Systems

The most well-known line of inquiry in the growing anti-deepfake research sector involves systems that can recognize artifacts or other supposedly distinguishing characteristics of deepfaked, synthesized, or otherwise falsified or ‘edited’ faces in video and image content.

Such approaches use a variety of tactics, including depth detection, video regularity disruption, variations in monitor illumination (in potentially deepfaked live video calls), biometric traits, outer face regions, and even the hidden powers of the human subconscious system.

What these, and similar methods have in common is that by the time they are deployed, the central mechanisms they’re fighting have already been successfully trained on thousands, or hundreds of thousands of images scraped from the web – images from which autoencoder systems can easily derive key features, and create models that can accurately impose a false identity into video footage or synthesized images – even in real time.

In short, by the time such systems are active, the horse has already bolted.

Images That Are Hostile to Deepfake/Synthesis Architectures

By way of a more preventive approach to the threat of deepfakes and image synthesis, a less well-known strand of research in this sector involves the possibilities inherent in making all those source photos unfriendly toward AI image synthesis systems, usually in imperceptible, or barely perceptible, ways.

Examples include FakeTagger, a 2021 proposal from various institutions in the US and Asia, which encodes messages into images; these encodings are resistant to the process of generalization, and can subsequently be recovered even after the images have been scraped from the web and trained into a Generative Adversarial Network (GAN) of the type most famously embodied by, and its numerous derivatives.

FakeTagger encodes information that can survive the process of generalization when training a GAN, making it possible to know if a particular image contributed to the system's generative capabilities. Source:


For ICCV 2021, another international effort likewise instituted artificial fingerprints for generative models (see image below), which again produces recoverable ‘fingerprints’ from the output of an image synthesis GAN such as StyleGAN2.

Even under a variety of extreme manipulations, cropping, and face-swapping, the fingerprints passed through ProGAN remain recoverable. Source:


Other iterations of this concept include a 2018 project from IBM and a digital watermarking scheme in the same year, from Japan.

More innovatively, a 2021 initiative from the Nanjing University of Aeronautics and Astronautics sought to ‘encrypt’ training images in such a way that they would train effectively only on authorized systems, but would fail catastrophically if used as source data in a generic image synthesis training pipeline.

Effectively, all these methods fall under the category of steganography, but in all cases the unique identifying information needs to be encoded as such an essential ‘feature’ of an image that there is no chance an autoencoder or GAN architecture would discard the fingerprints as ‘noise’ or outlier and inessential data, but rather will encode them along with other facial features.

At the same time, the process cannot be allowed to distort or otherwise visually affect the image so much that it is perceived by casual viewers to have defects or to be of low quality.
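A bare-bones steganographic embedding, hiding bits in pixel least-significant bits, shows the basic mechanic. The crucial difference in the schemes above is that their encodings are learned features robust to training, compression, and cropping, which naive LSB embedding is not:

```python
def embed(pixels, payload_bits):
    """Write each payload bit into the least-significant bit of a pixel value."""
    stamped = list(pixels)
    for i, bit in enumerate(payload_bits):
        stamped[i] = (stamped[i] & ~1) | bit
    return stamped

def extract(pixels, n_bits):
    """Read the payload back out of the pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

pixels = [137, 200, 54, 19, 250, 88, 31, 77]  # toy 8-pixel "image"
fingerprint = [1, 0, 1, 1, 0, 0, 1, 0]
stamped = embed(pixels, fingerprint)

assert extract(stamped, 8) == fingerprint
# Each pixel value changes by at most 1 -- visually imperceptible.
assert all(abs(a - b) <= 1 for a, b in zip(pixels, stamped))
```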


Now, a new German research effort (from the Technical University of Munich and Sony Europe RDC Stuttgart) has proposed an image-encoding technique whereby deepfake models or StyleGAN-type frameworks that are trained on processed images will produce unusable blue or white output, respectively.

TAFIM's low-level image perturbations address several possible types of face distortion/substitution, forcing models trained on the images to produce distorted output, and is reported by the authors to be applicable even in real-time scenarios such as DeepFaceLive's real-time deepfake streaming. Source:


The paper, titled TAFIM: Targeted Adversarial Attacks against Facial Image Manipulations, uses a neural network to encode barely-perceptible perturbations into images. After the images are trained and generalized into a synthesis architecture, the resulting model will produce discolored output for the input identity if used in either style mixing or straightforward face-swapping.

Re-Encoding the Web..?

However, in this case we are not here to examine the minutiae and architecture of the latest version of this popular concept, but rather to consider the practicality of the whole idea – particularly in light of the growing controversy over the use of publicly-scraped images to power image synthesis frameworks such as Stable Diffusion, and the downstream legal implications of deriving commercial software from content that may (at least in some jurisdictions) eventually prove to have legal protection against ingestion into AI synthesis architectures.

Proactive, encoding-based approaches of the kind described above come at no small cost. At the very least, they would involve instituting new and extended compression routines in standard web-based processing libraries such as ImageMagick, which power a large number of upload processes, including many social media upload interfaces. These libraries are tasked with converting over-sized original user images into optimized versions more suitable for lightweight sharing and network distribution, and with effecting transformations such as crops and other augmentations.
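To make the idea concrete, a server-side hook of this kind might look like the following sketch, which adds a keyed, low-amplitude perturbation after the usual resize/optimize step. The key, the amplitude and the function names are all hypothetical – no real ImageMagick API is being modelled – and a real deployment would need a perturbation designed to survive recompression, rather than this simple hash-derived noise.

```python
import hashlib
import struct

# Hypothetical site secret used to derive a deterministic perturbation.
SITE_KEY = b"example-site-secret"

def keyed_perturbation(width, height, key=SITE_KEY, amplitude=2):
    """Deterministic per-pixel offsets in [-amplitude, amplitude]."""
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            digest = hashlib.sha256(key + struct.pack(">II", x, y)).digest()
            row.append(digest[0] % (2 * amplitude + 1) - amplitude)
        out.append(row)
    return out

def process_upload(pixels):
    """pixels: 2-D list of 8-bit grey values, standing in for a decoded image."""
    h, w = len(pixels), len(pixels[0])
    delta = keyed_perturbation(w, h)
    return [[max(0, min(255, pixels[y][x] + delta[y][x])) for x in range(w)]
            for y in range(h)]

original = [[128] * 4 for _ in range(4)]
processed = process_upload(original)
# Each pixel moves by at most 2 grey levels - visually negligible.
print(all(abs(processed[y][x] - 128) <= 2 for y in range(4) for x in range(4)))
```

Even this trivial version illustrates the operational cost: every upload path, thumbnailer and cache in the pipeline would need to apply (and preserve) the encoding consistently.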

The primary question that this raises is: would such a scheme be implemented ‘going forward’, or would some wider and retroactive deployment be intended, that addresses historical media that may have been available, ‘uncorrupted’, for decades?

Platforms such as Netflix are not averse to the expense of re-encoding a back catalogue with new codecs that may be more efficient, or could otherwise provide user or provider benefits; likewise, YouTube's conversion of its historic content to the H.264 codec – apparently to accommodate Apple TV – was a logistically monumental task that was evidently not considered prohibitive, despite its scale.

Ironically, even if large portions of media content on the internet were to become subject to re-encoding into a format that resists training, the limited cadre of influential computer vision datasets would remain unaffected. However, presumably, systems that use them as upstream data would begin to diminish in quality of output, as watermarked content would interfere with the architectures’ transformative processes.

Political Conflict

In political terms, there is an apparent tension between the determination of governments not to fall behind in AI development and the pressure to make concessions to public concern about the ad hoc use of openly available audio, video and image content on the internet as an abundant resource for transformative AI systems.

Officially, western governments are inclined to leniency in regards to the ability of the computer vision research sector to make use of publicly available media, not least because some of the more autocratic Asian countries have far greater leeway to shape their development workflows in a way that benefits their own research efforts – just one of the factors that suggests China is becoming the global leader in AI.

In April of 2022, the US Appeals Court affirmed that public-facing web data is fair game for research purposes, despite the ongoing protests of LinkedIn, which wishes its user profiles to be protected from such processes.

If AI-resistant imagery is therefore not to become a system-wide standard, there is nothing to prevent some of the major sources of training data from implementing such systems, so that their own output becomes unproductive in the latent space.

The essential factor in such company-specific deployments is that images should be innately resistant to training. Blockchain-based provenance techniques, and movements such as the Content Authenticity Initiative, are more concerned with proving that images have been faked or 'StyleGANned' than with preventing the mechanisms that make such transformations possible.

Casual Inspection

While proposals have been put forward to use blockchain methods to authenticate the true provenance and appearance of a source image that may have been later ingested into a training dataset, this does not in itself prevent the training of images, or provide any way to prove, from the output of such systems, that the images were included in the training dataset.

In a watermarking approach to excluding images from training, it would be important not to rely on the source images of an influential dataset being publicly available for inspection. In response to artists’ outcries about Stable Diffusion’s liberal ingestion of their work, the website allows users to upload images and check if they are likely to have been included in the LAION5B dataset that powers Stable Diffusion:

'Lenna', literally the poster girl for computer vision research until recently, is certainly a contributor to Stable Diffusion. Source:


However, nearly all traditional deepfake datasets are casually drawn from video and images extracted from the internet into non-public databases, where only some kind of neurally-resistant watermarking could possibly expose the use of specific images in the creation of the derived images and video.

Further, Stable Diffusion users are beginning to add content – either through fine-tuning (continuing the training of the official model checkpoint with additional image/text pairs) or Textual Inversion, which adds one specific element or person – that will not appear in any search through LAION’s billions of images.
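The sort of membership lookup described above can be sketched with a toy perceptual hash: a dataset operator publishes hashes of ingested images, and an artist hashes their own file and checks for a match. Real services index LAION-scale data via embeddings and URL metadata, not this simple 64-bit average hash; everything here is illustrative.

```python
def average_hash(pixels):
    """64-bit hash of an 8x8 greyscale thumbnail (2-D list of ints)."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for v in flat:
        bits = (bits << 1) | (1 if v > mean else 0)
    return bits

# Hashes the (hypothetical) dataset operator has published.
thumb = [[(x * y) % 256 for x in range(8)] for y in range(8)]
dataset_index = {average_hash(thumb)}

# An artist checks whether their image was ingested.
query = [[(x * y) % 256 for x in range(8)] for y in range(8)]
print(average_hash(query) in dataset_index)
```

The weakness the article identifies is exactly that this only works when the operator publishes its index; a private scraped dataset offers nothing to hash against.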

Embedding Watermarks at Source

An even more extreme potential application of source image watermarking is to include obscured and non-obvious information into the raw capture output, video or images, of commercial cameras. Though the concept was experimented with, and even implemented with some vigor, in the early 2000s as a response to the emerging 'threat' of multimedia piracy, the principle is technically applicable also for the purpose of making media content resistant or repellent to machine learning training systems.

One implementation, mooted in a patent application from the late 1990s, proposed using Discrete Cosine Transforms to embed steganographic 'sub images' into video and still images, suggesting that the routine could be 'incorporated as a built-in feature for digital recording devices, such as still and video cameras'.

In a patent application from the late 1990s, Lenna is imbued with occult watermarks that can be recovered as necessary. Source:

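A minimal sketch of the DCT idea: embed one bit per 8x8 block by enforcing an order between two mid-frequency coefficients, a classic scheme in the same spirit as the patent. The coefficient positions and margin here are illustrative choices, and a production system would add error correction and robustness to recompression.

```python
import math

N = 8

def _c(k):
    return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)

def dct2(b):
    """Orthonormal 2-D DCT-II of an NxN block."""
    return [[_c(u) * _c(v) * sum(
        b[x][y] * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                * math.cos((2 * y + 1) * v * math.pi / (2 * N))
        for x in range(N) for y in range(N))
        for v in range(N)] for u in range(N)]

def idct2(C):
    """Inverse of dct2 (orthonormal 2-D DCT-III)."""
    return [[sum(
        _c(u) * _c(v) * C[u][v]
        * math.cos((2 * x + 1) * u * math.pi / (2 * N))
        * math.cos((2 * y + 1) * v * math.pi / (2 * N))
        for u in range(N) for v in range(N))
        for y in range(N)] for x in range(N)]

# Illustrative choices: two mid-frequency positions and an ordering margin.
P1, P2, MARGIN = (2, 3), (3, 2), 8.0

def embed_bit(block, bit):
    C = dct2(block)
    a, b = C[P1[0]][P1[1]], C[P2[0]][P2[1]]
    hi, lo = max(a, b) + MARGIN, min(a, b)
    C[P1[0]][P1[1]], C[P2[0]][P2[1]] = (hi, lo) if bit else (lo, hi)
    return idct2(C)

def extract_bit(block):
    C = dct2(block)
    return C[P1[0]][P1[1]] > C[P2[0]][P2[1]]

block = [[((x * 7 + y * 13) % 64) + 96 for y in range(N)] for x in range(N)]
marked = embed_bit(block, True)
print(extract_bit(marked))  # True - the bit survives the pixel round-trip
```

Because the mark lives in mid-frequency structure rather than high-frequency noise, it is the kind of feature a compression or synthesis pipeline is less likely to discard – which is precisely the property the anti-training proposals depend on.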

A less sophisticated approach is to impose clearly visible watermarks onto images at device level – a feature that's unappealing to most users, and redundant in the case of artists and professional media practitioners (not least stock image companies), who are able to protect the source data and add such branding or prohibitions as they deem fit.

Though at least one camera currently allows for optional logo-based watermark imposition that could signal unauthorized use in a derived AI model, logo removal via AI is becoming quite trivial, and even casually commercialized.

First published 25th September 2022.

Sat, 24 Sep 2022 21:27:00 -0500
The global multichannel order management market size is expected to grow at a Compound Annual Growth Rate (CAGR) of 9.4%


during the forecast period, to reach USD 4.2 billion by 2027 from USD 2.7 billion in 2022. Marketplaces, social media platforms, and online websites have all seen an increase in popularity as a result of expanding digitization activities in multichannel order management; such developments are anticipated to help the multichannel order management market grow rapidly.
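The headline figures are internally consistent, as a quick compound-growth check shows:

```python
# USD 2.7 bn in 2022 growing at a 9.4% CAGR over five years should land
# near the forecast USD 4.2 bn for 2027.
base, cagr, years = 2.7, 0.094, 2027 - 2022
forecast = base * (1 + cagr) ** years
print(round(forecast, 1))  # -> 4.2
```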

New York, Oct. 07, 2022 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Multichannel Order Management Market by Component, Deployment Mode, Application, Organization Size, Vertical and Region - Global Forecast to 2027" -

Due to an unforeseen increase in demand from numerous platforms and changing organizational structures, multichannel order management has proven difficult for businesses and their customers. In order to manage orders originating from various channels, cutting-edge multichannel order management software and services are being deployed.

The major market players include IBM, SAP, HCL Technologies, Oracle, and Salesforce, which have adopted numerous growth strategies, including acquisitions, new product launches, product enhancements, and business expansions, to enhance their market shares.

Based on deployment mode, the cloud segment to register the largest market size during the forecast period
The cloud-based multichannel order management deployment mode is an economical and effective approach for enterprises to handle large data concerns. Organizations can lower their infrastructure costs due to the pay-per-use pricing structure of cloud solutions.

Because no data must be stored on-premises, both the initial investment and ongoing maintenance costs for these solutions are drastically reduced. The cloud deployment mode is anticipated to register the largest market size and is projected to grow from USD 1,182 million to USD 1,752 million during the forecast period.
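The cloud figures imply an annual growth rate that the release does not state explicitly; a quick back-calculation gives a derived estimate, not a number from the report:

```python
# USD 1,182 m (2022) -> USD 1,752 m (2027) implies a cloud-segment CAGR of
# roughly 8.2% per year.
implied_cagr = (1752 / 1182) ** (1 / 5) - 1
print(round(implied_cagr * 100, 1))  # -> 8.2
```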

Based on component, the services segment to account for the highest CAGR during the forecast period
The amount of data that needs to be analyzed is increasing daily as a result of the rise in the number of data-generating sources. Services that form an integral part of the multichannel order management architecture include product maintenance, training, and consultation.

Vendors can develop and improve the inventory process with the help of multichannel order management services. These are further categorized into managed and professional services.

In the multichannel order management market, the services sector as a whole has a significant impact. These services help with cost-cutting, revenue growth, and staff performance enhancement.

The services segment is expected to register the highest CAGR of 11.4% during the forecast period.

Asia Pacific to hold the highest CAGR during the forecast period
For businesses that provide multichannel order management solutions, Asia Pacific has offered attractive market potential. During the forecast period, it is anticipated to become the region with the fastest-rising demand for multichannel order management solutions.

As businesses in this region quickly implement multichannel order management solutions to satisfy client demand, the market in Asia Pacific is anticipated to grow strongly.

Breakdown of primaries
In-depth interviews were conducted with Chief Executive Officers (CEOs), innovation and technology directors, system integrators, and executives from various key organizations operating in the multichannel order management market.
• By Company: Tier I: 34%, Tier II: 43%, and Tier III: 23%
• By Designation: C-Level Executives: 50%, D-Level Executives: 30%, and Managers: 20%
• By Region: APAC: 30%, Europe: 30%, North America: 25%, MEA: 10%, Latin America: 5%
The report includes the study of key players offering multichannel order management. It profiles major vendors in the multichannel order management market.

The major players in the multichannel order management market include IBM (US), Oracle (US), SAP (Germany), Salesforce (US), HCL Technologies (India), Zoho (India), Brightpearl (US), Square (US), Selro (England), Linnworks (England), Vinculum (India), Freestyle Solutions (US), Aptean (US), Etail Solutions (US), SellerActive (US), Delhivery (India), Cloud Commerce Pro (England), QuickBooks Commerce (India), Unicommerce (India), SalesWarp (US), Contalog (India), Browntape (India), Appian (US).

Research Coverage
The market study covers the multichannel order management market across segments. It aims at estimating the market size and the growth potential of this market across different segments, such as component, application, organization size, deployment mode, vertical, and region.

It includes an in-depth competitive analysis of the key players in the market, along with their company profiles, key observations related to product and business offerings, recent developments, and key market strategies.

Key Benefits of Buying the Report
The report would provide the market leaders/new entrants in this market with information on the closest approximations of the revenue numbers for the overall multichannel order management market and its subsegments. It would help stakeholders understand the competitive landscape and gain more insights to better position their business and plan suitable go-to-market strategies.

It also helps stakeholders understand the pulse of the market and provides them with information on key market drivers, restraints, challenges, and opportunities.
Read the full report:

About Reportlinker
ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.


CONTACT: Clare: US: (339)-368-6001 Intl: +1 339-368-6001
Fri, 07 Oct 2022 02:13:00 -0500