If you're a visual artist of any kind, it's a good bet that you've used one or more of the software tools in the Adobe Creative Cloud. If you're a working artist, you might even use them every day. Many creators take these platforms for granted, but Adobe has evolved its software by leaps and bounds with each new update, to the point that even everyday users might not be seeing their full potential.
The 2023 All-in-One Adobe Creative Cloud Suite Certification Course Bundle is a great way for new users to get up to speed - and for established creatives to take their work to the next level.
The bundle includes 10 courses that focus on Adobe's most powerful creative tools. As you go through each one, you'll pick up time-saving tricks and technical advice from an all-star team of certified Adobe instructors and established artists, entrepreneurs, and videographers.
Beginner walkthroughs on Photoshop and Illustrator will show you how to turn your laptop into a high-powered studio, and budding game designers can learn to use Adobe Animate to bring their characters to life. Photographers will find career-building tips in Lightroom and After Effects, and web designers can work smarter with a crash course on Adobe XD.
For videographers, there are no fewer than four courses on Adobe Premiere, including an intensive course that shows you how to edit videos directly on your phone with Premiere Rush. Best of all, many of these courses come with a certificate of completion that can help you turn your art into a career.
The full Adobe Creative Cloud Suite Certification Course Bundle includes 73 hours of instruction in all, and BleepingComputer readers can get access to it all for $29.99, more than 80% off the total MSRP for all courses.
Prices subject to change.
Disclosure: This is a StackCommerce deal in partnership with BleepingComputer.com. In order to participate in this deal or giveaway you are required to register an account in our StackCommerce store. To learn more about how StackCommerce handles your registration information please see the StackCommerce Privacy Policy. Furthermore, BleepingComputer.com earns a commission for every sale made through StackCommerce.
Game engines such as Unity and Unreal Engine 5 are becoming more popular than traditional rendering software, so why is the art world turning to real-time apps? Below, we ask three working artists why they use game engines and why they are worth learning.
To recap, it's worth understanding the context of the rise of game engines amongst artists. The prospect of the metaverse (remember that?) and the rise of a '3D internet' mean 3D skills are in demand; a recent survey by Adobe revealed graphic design studios are now more interested in artists with 3D skills. The advent of Apple's Vision Pro and a resurgence in VR and AR technology have also meant 3D skills, and Unity knowledge in particular, are on the rise.
Our feature 'Why you need to learn Unreal Engine and Unity' goes into detail on the usefulness of game engines, while a recent sit-down with developer Hexworks about its game Lords of the Fallen revealed how Unreal Engine 5 has changed the way the studio approaches environment design.
Of course, video game developers are used to working with real-time technology, but advances in the platforms have made them increasingly useful to arch-viz artists, app developers, web designers and filmmakers - read my interview with director Tim Richardson on how Unreal Engine is being embraced by the film and fashion industries.
Senior 3D artist Sergey Kuydin says: "Due to the fact that new technologies are emerging, and the performance of devices is growing, we are moving to a new stage in the production of computer graphics. Thanks to high-quality graphics out of the box in modern versions of game engines, even small teams can make a quality product, and the originality and relevance of the idea can come to the fore.
"The trend is moving towards more and more automation of the process, and towards improving the quality of the product at the final stage. Neural networks will actively help process large data sets and Boost graphics variability, as well as optimise labour costs, which will have a good effect on the budget and quality of the final product."
Level designer James Arkwright says: "I think there’s a growing trend of artists moving away from the traditional look and feel of real-time art towards styles and levels of fidelity that have previously been reserved for offline rendering.
"So artists are now aiming higher from the outset rather than settling for what game or real-time art traditionally has looked like. Real-time lighting can now easily be made to look soft, well-balanced and photographic, and the level of detail in geometry and materials is growing substantially as well.
"The technical barrier of entry continues to get lower, so the systems aren’t just visually superior but also easier for less-technical artists. A look that was impossible in Unreal Engine 3 or 4 is now almost default, so I think artists will increasingly look towards film and photography as the basis for their art, as these mediums are very visually refined and the lines between games and film continue to blur.
"The gap is closing rapidly, and I think it’s conceivable that in another generation or two of game engine development the pendulum may have swung completely towards real-time rendering, without much of a space for offline rendering except in extremely taxing scenarios."
Video game artist Jeryce Dianingana says: "One of the biggest opportunities is how real-time engines facilitate exchanges via the marketplace. Buyers can use packages that are ready for use in just a few clicks for any purpose including movies, games and music videos, among many others. Many studios in the past and present can rely on what the marketplace can offer to save time and money, which allows artists to have their 3D assets, shaders, or scripts brought into that process.
"Another huge opportunity is the possibility of massively reducing the time spent creating incredible art. Time is one of the most expensive aspects of the development of a project and real-time engines help artists create and iterate much faster due to the fact they don’t have to wait for every frame to be rendered offline. Additionally, exchanging with a client and having the flexibility to address feedback during a meeting is a huge gain of time for both parties.
"If we also take the example of a movie where artists can use a virtual reality camera to quickly set different shots, along with the ability to change any settings of the background, it’s something that can eventually save weeks of work, or perhaps even months depending on the size of the production at hand."
So, game engines are on the rise. There's more to read to get up to speed on getting started in real-time platforms: I'd suggest reading our Unreal Engine 5 review, as well as our Unity versus Unreal Engine comparison feature.
If you're serious about learning one of the two most popular platforms - Unity and Unreal Engine - then I'd suggest heading to the official sites for free training and downloads: you can find advice videos and assets at Unity's website and the official Unreal Engine blog.
This content originally appeared in 3D World magazine. Subscribe to 3D World at Magazines Direct.
A fantastic opportunity for a Senior AEM Developer to join a team of certified developers building the next generation of software systems for the Group's future driving machines.
You will be part of the DevOps team or Analytics Feature team, responsible for AEM development and for integrating Adobe Analytics with AEM sites.
Core understanding of and working experience with:
Beneficial skills in addition to the above:
Reference Number for this position is GZ53498. This is a long-term contract position rotating between Midrand, Menlyn, Rosslyn and home office, offering a contract rate of between R600 and R750 per hour, negotiable on experience and ability. Contact Garth on [Email Address Removed] or call him on [Phone Number Removed] to discuss this and other opportunities.
Are you ready for a change of scenery? e-Merge IT Recruitment is a specialist niche recruitment agency. We offer our candidates options so that we can successfully place the right developers with the right companies in the right roles. Check out the e-Merge website [URL Removed] for more great positions.
Do you have a friend who is a developer or technology specialist? We pay cash for successful referrals!
Desired Skills:
Desired Work Experience:
Desired Qualification Level:
Dependency confusion is becoming a serious cybersecurity threat. Learn which organizations are at risk and how to protect systems against these attacks.
Application development often requires the integration of third-party or open-source dependencies for efficient functionality and support of other features. However, there is now reason for security professionals to be concerned about dependencies, as attackers can introduce malicious code into applications through them.
Dependency confusion attacks are relatively new, though these cybersecurity threats have already shown they can wreak havoc on organizations. We share specifics from new security research about dependency confusion attacks, as well as explain how these attacks work, who is most at risk and how to mitigate them.
New research from OX Security, a DevOps software supply chain security company, revealed that almost all applications with more than one billion users, and more than 50% of applications with 30 million users, use dependencies that are vulnerable to dependency confusion attacks. The research also showed that when an organization is at risk, 73% of its assets are exposed to dependency confusion attacks.
The OX Security report’s findings are similar to a report earlier this year from Orca Security that found about 49% of organizations are vulnerable to a dependency confusion attack.
One notable example of a dependency confusion attack is the malicious dependency package reported by PyTorch in December 2022. The organization warned users of a possible compromise of the Python Package Index (PyPI) code repository. In this incident, attackers uploaded a malicious dependency to the PyPI code repository and ran a malicious binary, enabling them to launch a supply chain attack.
Another related incident occurred in 2022, when an attacker injected malicious code into the popular open-source package node-ipc. During this incident, millions of files were wiped from computers located in Russia and Belarus.
In a dependency confusion attack, the attacker uploads a software package with the same name as an authentic one in your private repository to a public package repository. Having a software package with the same name in both private and public repositories can trick developers into using a malicious version of the package. When developers mistakenly fall for this or their package managers search the public repositories for dependency packages, their legitimate app could install malicious code that the hacker can exploit to launch an attack.
Dependency confusion is a form of supply chain issue. This subject attracted attention in 2021 when security researcher Alex Birsan disclosed in a Medium post that he breached more than 35 major companies, including Apple, Microsoft, Yelp and PayPal, using dependency confusion techniques.
For dependency confusion to work, the hacker first identifies a package name in the private repository and registers the same package name in the public repository, so that when a new update to the application is installed, it pulls in the malicious version from the public registry instead of the safe one in the private registry.
Speaking to TechRepublic, OX Security CEO and Co-Founder Neatsun Ziv explained that because hackers understand that most application package managers, such as npm, pip and RubyGems, check for dependencies on the public code repository before the private registry, they try to register the same package names in your private registry on the public registry. For instance, if a developer wants to install a package hosted on their private or internal repository but can’t reach the private repository where it’s stored, the developer’s dependency manager will attempt to find a similarly named package on a public registry and use that instead.
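To make the failure mode concrete, here is a minimal, illustrative Python sketch, not any real package manager's code, of the "highest version wins" resolution behavior described above; the package name and version numbers are hypothetical.

```python
# Toy model of naive dependency resolution across two indexes.
# An attacker registers the internal name on the public index with a
# deliberately high version number, so the naive resolver picks it.
PRIVATE_INDEX = {"acme-internal-utils": [(1, 2, 0)]}
PUBLIC_INDEX = {"acme-internal-utils": [(99, 0, 0)]}  # attacker-registered

def resolve(package: str) -> tuple:
    """Return the highest version found across both indexes."""
    candidates = PRIVATE_INDEX.get(package, []) + PUBLIC_INDEX.get(package, [])
    return max(candidates)

print(resolve("acme-internal-utils"))  # (99, 0, 0): the malicious package wins
```

Real package managers differ in the details, but any configuration that lets a public index compete with a private one for the same name reproduces this outcome.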
OX Security’s study, which examined more than 54,000 repositories in over 1,000 organizations across a wide range of sectors, including fintech, media and SaaS companies, found that organizations of all sizes are exposed to dependency confusion attacks. Ziv explained that most organizations are at risk because they use vulnerable packages or free-to-register public registries, which are vulnerable to dependency confusion attacks.
“These findings of our latest research are deeply disturbing, as these types of attacks not only compromise the integrity and security of organizational assets, but they potentially impact those organizations’ employees and users globally. Moreover, the fact that when an organization is at risk, a staggering 73% of their assets are vulnerable really sheds light on just how exposed many organizations, regardless of size or industry, really are,” said Ziv.
According to Ziv, the most effective means to prevent dependency confusion is to reserve private package names in the public registry so nobody can register them in the public registry. Software developers can do this by going to package manager sites such as npm, if they’re using JavaScript, and then creating their account and registering the package name. By doing this, developers can prevent the attack at the source (i.e., the public repository) while also limiting the number of human error risks that expose their projects to dependency confusion attacks. Some of these human error risks include the lack of adequate code review, misconfigured build systems, lack of security best practices and unvalidated external dependencies.
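As a first step toward reserving names, a team needs to know which of its internal package names are still unclaimed publicly. Below is a minimal sketch, assuming the Python/PyPI ecosystem, that queries PyPI's public JSON API for each name; the internal package name shown is hypothetical. A name that returns 404 is unclaimed and could be registered with a harmless placeholder release.

```python
# Check whether internal package names are already registered on PyPI.
import urllib.request
import urllib.error

def exists_on_pypi(name: str) -> bool:
    """Return True if the package name is registered on the public index."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:  # not registered: the name is claimable
            return False
        raise

for internal_name in ["acme-internal-utils"]:  # hypothetical internal name
    status = "taken" if exists_on_pypi(internal_name) else "unclaimed"
    print(f"{internal_name}: {status}")
```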
Another way developers can deal with dependency confusion is by validating the package source before installing new packages or updating to an updated version. Fortunately, many package managers allow you to view a package before installing it.
Software developers can also prevent dependency confusion by using package managers that allow the use of prefixes, IDs or namespaces when naming their packages. This practice ensures that internal dependencies are fetched from private repositories.
Nvidia said the Omniverse platform will leverage the OpenUSD framework and generative AI to accelerate the creation of virtual worlds and advanced workflows for industrial digitalization.
Nvidia rolled out a bunch of announcements related to its Omniverse platform, a unifying platform for the industrial metaverse, at the Siggraph computer graphics event in Los Angeles.
Nvidia CEO Jensen Huang made a bunch of announcements about RTX workstation graphics chips, generative AI tools, and Nvidia’s contributions to the Open Universal Scene Description (OpenUSD) 3D file format for open and extensible 3D worldbuilding.
Rev Lebaredian, vice president of Omniverse and Simulation Technology at Nvidia, said in an interview with VentureBeat that generative AI is going to provide a big boost to Omniverse.
“Nvidia AI and Omniverse are two distinct platforms,” Lebaredian said. “But they’re linked. They support each other. We can’t predict exactly when AI is going to be smart enough to do some of the things we want. There’s no way to know. It’s all research. We’re pleasantly surprised that the large language models and the things they’re doing with ChatGPT happened a little bit earlier than most people expected. And so now we’re harnessing that for Omniverse.”
Lebaredian said that with AI today, the big change is that large language models are encapsulating a lot of knowledge and it seems like AI is understanding what humans are searching for.
“This changes everything for USD and what we’re doing with 3D,” he said. “One of the fundamental changes is the ability to discern natural language and a human’s intent. Not a lot of people in the world have a deep understanding of how a computer works and how computer languages work. Not a lot of people in the world can write software programs. But what we’re seeing with ChatGPT, and in all these other models, is that they’re actually quite good at writing software, which democratizes the ability to program. It feels like it’s not going to be long until virtually everyone who has access to a computer is essentially going to be a programmer. You just tell it what to do.”
“With the creation of 3D virtual worlds, this is invaluable,” Lebaredian said. “Being able to write programs that assist you in generating the worlds is awesome. So what we announced there is that we’re working on training a model specialized for USD, with code that calls into the USD APIs. We call those code snippets. That’s so that anyone could potentially become a developer for USD.”
He said you can use prompts to do things like find all objects in a scene, or find all objects that are larger than a certain size or that have a metallic material. Normally, you'd have to write a Python script or some C++ to do something like that.
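For a sense of what such a script looks like today, here is a minimal sketch using OpenUSD's publicly available Python bindings (pxr); the stage path and size threshold are hypothetical, and this is an illustration rather than anything generated by Nvidia's tools.

```python
# Find every imageable prim whose world-space bounding box exceeds a
# threshold -- the kind of query Lebaredian says prompts could replace.
from pxr import Usd, UsdGeom

stage = Usd.Stage.Open("scene.usd")  # hypothetical stage path
bbox_cache = UsdGeom.BBoxCache(Usd.TimeCode.Default(),
                               [UsdGeom.Tokens.default_])

MIN_SIZE = 10.0  # arbitrary threshold, in scene units

for prim in stage.Traverse():
    if not prim.IsA(UsdGeom.Imageable):
        continue  # skip non-renderable prims (scopes, materials, etc.)
    extent = bbox_cache.ComputeWorldBound(prim).ComputeAlignedRange().GetSize()
    if max(extent[0], extent[1], extent[2]) > MIN_SIZE:
        print(prim.GetPath(), extent)
```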
Omniverse, Nvidia’s platform for building and connecting 3D tools, received a major upgrade. New connectors and advancements showcased in Omniverse foundation applications enhance the platform’s efficiency and user experience. Notable updates include Omniverse USD Composer, which allows users to assemble large-scale, OpenUSD-based scenes, and Omniverse Audio2Face, which provides generative AI APIs for realistic facial animations and gestures.
Popular applications such as Cesium, Convai, Move AI, SideFX Houdini, and Wonder Dynamics are now seamlessly connected to Omniverse via OpenUSD, expanding the platform’s capabilities.
Rev Lebaredian, vice president of Omniverse and Simulation Technology at Nvidia, said in an interview with GamesBeat that there’s growing demand for connected and interoperable 3D software ecosystems among industrial enterprises.
He emphasized that the latest Omniverse update empowers developers to harness generative AI through OpenUSD, enhancing their tools. Moreover, the update enables enterprises to build larger, more complex world-scale simulations, serving as digital testing grounds for industrial applications.
Key improvements to the Omniverse platform include enhancements to Omniverse Kit, which serves as the engine for developing native OpenUSD applications and extensions. Moreover, Nvidia introduced the Omniverse Kit Extension Registry, a central repository for accessing, sharing, and managing Omniverse extensions.
This registry empowers developers to easily customize their applications by enabling them to turn functionalities on and off. Additionally, extended-reality developer tools were introduced, enabling users to incorporate spatial computing options into their Omniverse-based applications.
With over 600 core Omniverse extensions provided by Nvidia, developers can now build custom applications with greater ease, enabling modular app building.
The Omniverse update also brings new developer templates and resources that give developers a head start with OpenUSD and Omniverse, requiring minimal coding to get started. Rendering optimizations have been implemented to fully leverage the Ada Lovelace architecture in Nvidia RTX GPUs. The integration of DLSS 3 technology into the Omniverse RTX Renderer and the addition of an AI denoiser enable real-time 4K path tracing of massive industrial scenes.
Another key highlight of the update is the native RTX-powered spatial integration, which allows users to build spatial-computing options directly into their Omniverse-based applications. This integration provides users with flexibility in experiencing their 3D projects and virtual worlds.
The Omniverse platform update includes upgrades to various foundation applications, which serve as customizable reference applications for creators, enterprises, and developers. One such application is Omniverse USD Composer, which enables users to assemble large-scale OpenUSD-based scenes. Omniverse Audio2Face, another upgraded application, offers access to generative AI APIs for creating realistic facial animations and gestures from audio files, now including multilingual support and a new female base model.
Several customers have already embraced Omniverse for various digitalization tasks. Boston Dynamics AI Institute is utilizing Omniverse to simulate robots and their interactions, facilitating the design of novel robotics and control systems.
Continental, a leading company in automotive and autonomous systems, leverages Omniverse to generate physically accurate synthetic data at scale for training computer-vision AI models and performing system-integration testing in its mobile robots business.
Volvo Cars has transitioned its digital twin to be OpenUSD-based, providing immersive visualizations to aid customers in making online purchasing decisions. Marks Design, a brand design and experience agency, has adopted Omniverse and OpenUSD to streamline collaboration and enhance animation, visualization, and rendering workflows.
The latest release of the Omniverse platform is currently available in beta for free download and testing. Nvidia plans to launch the commercial version of Omniverse in the coming months, offering subscription plans and enterprise support to meet the needs of businesses and organizations.
With the major update to Omniverse, Nvidia aims to empower developers, creators, and industrial enterprises with advanced 3D pipelines and generative AI capabilities.
The platform’s integration with popular applications, improved developer resources, and expanded ecosystem partnerships are set to drive innovation in the fields of industrial digitalization, robotics, autonomous systems, computer graphics, and more.
In a major leap towards unlocking the next era of 3D graphics, design, and simulation, Nvidia formally announced its participation in the Alliance for OpenUSD, joining forces with industry giants Pixar, Adobe, Apple, and Autodesk. This collaboration ensures compatibility in 3D tools and content across industries, paving the way for digitalization. (Intel and Advanced Micro Devices have yet to join this effort).
Nvidia also launched three new desktop workstation Ada Generation GPUs: the Nvidia RTX 5000, RTX 4500, and RTX 4000. And Shutterstock, a creative platform for image creators, unveiled its use of generative AI for 3D scene backgrounds.
Leveraging Nvidia Picasso, a cloud-based foundry for building visual generative AI models, Shutterstock trained a foundation model that can generate photorealistic, 8K, 360-degree high-dynamic-range imaging (HDRi) environment maps.
This development enables quicker scene development for artists and designers. Additionally, Autodesk announced its integration of generative AI content-creation services, developed using foundation models in Picasso, with its popular software Autodesk Maya.
Nvidia Studio Driver releases, which provide optimal performance and reliability for artists, creators, and 3D developers, also received an update. The August Nvidia Studio Driver includes support for updates to Omniverse, XSplit Broadcaster, and Reallusion iClone, ensuring peak reliability for users’ favorite creative applications.
Adobe and Nvidia strengthened their collaboration across Adobe Substance 3D, generative AI, and OpenUSD initiatives. They announced plans to make Adobe Firefly, Adobe’s family of creative generative AI models, available as APIs in Omniverse, thereby enhancing the design processes of developers and creators.
The new Nvidia RTX 5000, RTX 4500, and RTX 4000 Ada Generation professional desktop GPUs, featuring Nvidia’s Ada Lovelace architecture, deliver improved rendering, real-time interactivity, and AI performance in 3D applications.
These GPUs incorporate third-generation RT Cores for enhanced image processing and fourth-generation Tensor Cores for AI training and development. With large GPU memory and advanced video encoding capabilities, the GPUs are tailored to meet the demands of high-end creative workflows.
During the event, Nvidia artist Andrew Averkin showcased his work, “Natural Coffee,” which exemplified the fusion of art and technology. Averkin utilized AI to generate visual ideas and employed Nvidia’s GPUs and Omniverse USD Composer for efficient 3D modeling, lighting, and scene composition. The integration of these tools significantly reduced the time required for creating immersive 3D scenes.
Nvidia has contributed a number of resources, frameworks, and services aimed at accelerating the adoption of Universal Scene Description (USD), known as OpenUSD, as a standard for 3D content.
OpenUSD is a 3D framework designed to foster interoperability between software tools and data types for creating virtual worlds. It has a chance to become the lingua franca of 3D content, including the universe of virtual worlds known as the metaverse.
To bolster the development of OpenUSD, Nvidia has introduced a portfolio of cutting-edge technologies and cloud application programming interfaces (APIs), including ChatUSD and RunUSD. These APIs, along with the newly launched Nvidia OpenUSD Developer Program, are poised to drive the widespread adoption of OpenUSD, amplifying its potential impact.
The investments made by Nvidia in OpenUSD build upon its co-founding of the Alliance for OpenUSD (AOUSD), an organization formed in collaboration with industry leaders such as Pixar, Adobe, Apple, and Autodesk. The AOUSD aims to establish standardized OpenUSD specifications, facilitating seamless integration and cooperation across the industry.
In a keynote at Siggraph, Nvidia CEO Jensen Huang expressed his enthusiasm for OpenUSD, likening its potential to the transformative impact of HTML on the 2D internet. Huang believes that OpenUSD will spearhead the era of collaborative 3D design and industrial digitalization. He said Nvidia is fully committed to advancing and promoting OpenUSD through the development of Nvidia Omniverse and generative AI.
As part of its OpenUSD initiative, Nvidia has introduced four new Omniverse Cloud APIs, developed in-house, to enable developers to seamlessly implement and deploy OpenUSD pipelines and applications:
RunUSD: This cloud API checks the compatibility of uploaded OpenUSD files against different OpenUSD releases and generates fully path-traced renders using Omniverse Cloud. A demo of the API is available to developers in the Nvidia OpenUSD Developer Program.
ChatUSD: Powered by the Nvidia NeMo framework, ChatUSD is a large language model (LLM) agent capable of generating Python-USD code scripts from text and answering USD-related queries. This groundbreaking model will assist developers and artists in working with OpenUSD data and scenes, democratizing OpenUSD expertise. ChatUSD will soon be available as an Omniverse Cloud API.
DeepSearch: DeepSearch is an LLM agent designed for fast semantic search through vast untagged asset databases, streamlining asset discovery and retrieval.
USD-GDN Publisher: This one-click service empowers enterprises and software developers to publish high-fidelity, OpenUSD-based experiences to the Omniverse Cloud Graphics Delivery Network (GDN). It allows real-time streaming to web browsers and mobile devices, enhancing accessibility and user experience.
In addition to these cloud APIs, Nvidia is also focusing on evolving OpenUSD functionality to address diverse industrial and perception AI workloads. Nvidia is developing geospatial data models that enable the creation of simulations and calculations for true-to-reality digital twins of factories, cities, and even the Earth. To ensure accurate representation, these models account for the curvature of the planet.
To enable seamless integration of diverse datasets, Nvidia is working on a metrics assembly for OpenUSD, allowing users to combine different datasets accurately. Furthermore, Nvidia is developing SimReady 3D models that incorporate realistic material and physical properties. These models are crucial for training autonomous robots and vehicles, enabling them to interact with the physical world accurately.
Awesome prospect is available for a Senior AEM Developer (Adobe Experience Manager) whizz!!
The international team offers opportunities to travel to Germany, as well as world-leading, cutting-edge technologies, while working alongside like-minded expert engineers and developers. Their next-generation thinking strives to push innovative boundaries in the manufacturing industry every day.
If you are a go-getter and seeking a challenge, this role is for you!!! APPLY NOW!!
They need seniors with 8+ years of exposure to and experience with Adobe Experience Manager development, as well as:
Reference Number for this position is GZ53498. This is a long-term contract position rotating between Midrand/Rosslyn and working from home, offering a contract rate of between R600 and R750 per hour, negotiable on experience and ability. Contact Garth on [Email Address Removed] or call him on [Phone Number Removed] to discuss this and other opportunities.
Are you ready for a change of scenery? e-Merge IT Recruitment is a specialist niche recruitment agency. We offer our candidates options so that we can successfully place the right developers with the right companies in the right roles.
Check out the e-Merge website [URL Removed] for more great positions.
Do you have a friend who is a developer or technology specialist? We pay cash for successful referrals!
Desired Skills:
Desired Work Experience:
Desired Qualification Level:
IBM makes plans with Meta's Llama 2. Plus, why open source may or may not be a differentiator in generative AI.
The race for territory in the generative AI-as-a-service world continues as IBM brings Meta's open-source large language model, Llama 2, to its platform. Watsonx is a generative AI foundation model platform, while watsonx.ai is the studio for building and fine-tuning foundation models, including generative AI and machine learning applications.
“IBM believes a platform is only as valuable as the ecosystem it enables,” said Tarun Chopra, vice president of product management for IBM Data and AI, in an email to TechRepublic. “All of the world’s enterprises and their customers — not just a select few — should benefit from foundation models and generative AI.”
Users of IBM’s Watsonx data enterprise platform now have access to Meta’s Llama 2, IBM announced on July 18. Depending on the version, Llama 2 can be up to a 70-billion parameter model. It was trained on 2 trillion tokens of data from publicly available sources.
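IBM hasn't published Watsonx client code alongside this announcement, but for readers who want a feel for the model itself, a common route to Llama 2 outside Watsonx is Hugging Face's transformers library. A minimal sketch follows; the meta-llama repositories are gated and require accepting Meta's license first, and the 7B chat variant shown here is the smallest.

```python
# Load a Llama 2 chat model and generate a short completion.
# Requires: transformers, accelerate (for device_map="auto"), and
# approved access to the gated meta-llama repositories.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain what a foundation model is.", return_tensors="pt")
inputs = inputs.to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```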
Including Llama 2 in Watsonx is part of a business strategy of portraying and providing “open innovation that’s guarded by embedded governance and trustworthy principles,” Chopra said.
Llama 2 is now available within the watsonx.ai studio in early access for select clients and partners. IBM has not yet revealed a date for a full release.
The watsonx.ai prompt lab has a guardrail function users can turn on or off to remove potentially harmful content from both the input and output text.
SEE: Hiring Kit: Prompt Engineer (TechRepublic Premium)
There has been some opposition in the developer community over the use of the term open source in the context of Meta and Llama 2. The Open Source Initiative, one of the standards bodies behind open-source software, is still in the process of defining what open source means when referring to generative AI.
“The value in an AI tool isn’t simply in the model or algorithm,” said Peter Zaitsev, founder of open-source database software company Percona, in an email to TechRepublic. “An equally important element is the data or AI weights (a.k.a. neural net weights) used to train the model for a specific use or application. This aspect is inherently tricky for applying open source principles.”
Until the OSI develops a standard, companies like Meta “are misusing the ‘open’ title for their own benefit,” Zaitsev said.
“Business leaders should seek platforms and models that can safely tap their organizations’ unique data sets and proprietary domain knowledge,” Chopra said.
He also offered the following questions to ask when choosing which generative AI platform to use:
SEE: Generative AI may change the way security professionals see the cloud. (TechRepublic)
Llama 2 competes primarily with GPT-4 and Anthropic’s Claude 2. Other players in the AI platform space include Amazon SageMaker Studio, Google Vertex AI and Microsoft Azure AI.