1Z0-071 PDF dumps are a must for success in the actual test

killexams.com 1Z0-071 exam dumps contain a full pool of questions and answers, plus actual questions checked and accredited along with references and explanations (where applicable). Our focus in collecting the 1Z0-071 questions and answers is not simply to help you pass the 1Z0-071 test at the first attempt, but to genuinely improve your knowledge of the 1Z0-071 test subjects.

Exam Code: 1Z0-071 Practice test 2022 by Killexams.com team
1Z0-071 Oracle Database 12c SQL

Relational Database concepts
Explaining the theoretical and physical aspects of a relational database
Relating clauses in SQL Select Statement to Components of an ERD
Explaining the relationship between a database and SQL

Restricting and Sorting Data

Applying Rules of precedence for operators in an expression
Limiting Rows Returned in a SQL Statement
Using Substitution Variables
Using the DEFINE and VERIFY commands
Sorting Data
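
To make the topics above concrete, here is a minimal sketch, assuming Oracle's sample HR schema (the employees table and its columns come from that schema, not from the exam outline itself):

```sql
-- Restricting and sorting, sketched against the sample HR schema (assumed).
-- Parentheses override the default precedence of AND over OR:
SELECT employee_id, last_name, salary
FROM   employees
WHERE  (department_id = 50 OR department_id = 80)
AND    salary > 5000
ORDER  BY salary DESC, last_name   -- sort by salary, then break ties by name
FETCH  FIRST 10 ROWS ONLY;         -- 12c row-limiting clause

-- Substitution variables (SQL*Plus / SQL Developer):
SET VERIFY ON     -- VERIFY echoes the line before and after substitution
DEFINE dept = 50  -- DEFINE creates a substitution variable
SELECT last_name FROM employees WHERE department_id = &dept;
```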

Using Conversion Functions and Conditional Expressions
Applying the NVL, NULLIF, and COALESCE functions to data
Understanding implicit and explicit data type conversion
Using the TO_CHAR, TO_NUMBER, and TO_DATE conversion functions
Nesting multiple functions
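
A hedged sketch of these functions, again assuming the sample HR schema; note how NVL forces the nesting of TO_CHAR, because its two arguments must have matching data types:

```sql
-- Conversion functions and conditional expressions, with nesting.
SELECT last_name,
       TO_CHAR(hire_date, 'DD-Mon-YYYY')       AS hired,         -- explicit date-to-text
       TO_CHAR(salary, '$99,999.00')           AS sal_fmt,       -- number formatting
       NVL(TO_CHAR(commission_pct), 'No Comm') AS comm,          -- nested: NVL needs matching types
       COALESCE(commission_pct, 0) * salary    AS comm_amount,   -- first non-NULL argument
       NULLIF(department_id, 50)               AS dept_or_null   -- NULL when both arguments match
FROM   employees
WHERE  hire_date > TO_DATE('01-01-2006', 'DD-MM-YYYY');          -- explicit text-to-date
```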

Displaying Data from Multiple Tables
Using Self-joins
Using Various Types of Joins
Using Non equijoins
Using OUTER joins
Understanding and Using Cartesian Products
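
The join varieties above might look like this — a sketch assuming the sample HR schema, with a hypothetical job_grades table borrowed for the non-equijoin:

```sql
-- Inner equijoin:
SELECT e.last_name, d.department_name
FROM   employees e
JOIN   departments d ON e.department_id = d.department_id;

-- Self-join; OUTER keeps the top manager, who has no manager row:
SELECT e.last_name, m.last_name AS manager
FROM   employees e
LEFT OUTER JOIN employees m ON e.manager_id = m.employee_id;

-- Non-equijoin (job_grades is a hypothetical lookup table):
SELECT e.last_name, g.grade_level
FROM   employees e
JOIN   job_grades g ON e.salary BETWEEN g.lowest_sal AND g.highest_sal;

-- Cartesian product: every employee row paired with every department row:
SELECT e.last_name, d.department_name
FROM   employees e
CROSS JOIN departments d;
```
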
Using SET Operators
Matching the SELECT statements
Using the ORDER BY clause in set operations
Using The INTERSECT operator
Using The MINUS operator
Using The UNION and UNION ALL operators
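
A short illustration of the set operators, assuming the HR schema's employees and job_history tables (both expose employee_id and job_id, so the column lists match as required):

```sql
SELECT employee_id, job_id FROM employees      -- current jobs
UNION                                          -- distinct rows from both sets
SELECT employee_id, job_id FROM job_history;   -- past jobs

-- UNION ALL keeps duplicates; INTERSECT returns common rows; MINUS subtracts:
SELECT employee_id FROM employees
MINUS
SELECT employee_id FROM job_history;

-- ORDER BY may appear only once, at the end of the compound query,
-- and it references the first branch's column names or aliases:
SELECT last_name AS name FROM employees
UNION ALL
SELECT department_name FROM departments
ORDER BY name;
```
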
Managing Indexes, Synonyms and Sequences

Managing Indexes
Managing Synonyms
Managing Sequences
Managing Views
Managing Objects with Data Dictionary Views
Using data dictionary views
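
The statements below sketch these management tasks; all object names are illustrative rather than prescribed by the exam:

```sql
CREATE INDEX emp_last_name_ix ON employees (last_name);        -- B-tree index

CREATE SYNONYM emp FOR employees;                              -- alternative name

CREATE SEQUENCE employees_seq START WITH 1000 INCREMENT BY 1;  -- key generator
SELECT employees_seq.NEXTVAL FROM dual;                        -- draw the next value

CREATE OR REPLACE VIEW dept50_v AS
  SELECT employee_id, last_name
  FROM   employees
  WHERE  department_id = 50;

-- Data dictionary views describe the objects just created
-- (object names are stored in uppercase):
SELECT object_name, object_type
FROM   user_objects
WHERE  object_name IN ('EMP', 'EMPLOYEES_SEQ', 'DEPT50_V');
```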

Retrieving Data using the SQL SELECT Statement
Using Column aliases
Using The SQL SELECT statement
Using concatenation operator, literal character strings, alternative quote operator, and the DISTINCT keyword
Using Arithmetic expressions and NULL values in the SELECT statement
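
A compact sketch of these SELECT fundamentals, assuming the sample HR schema:

```sql
SELECT DISTINCT
       department_id                  AS "Dept",         -- column alias, duplicates removed
       first_name || ' ' || last_name AS full_name,      -- concatenation operator
       q'[It's a literal]'            AS note,           -- alternative quote operator
       salary * 12                    AS annual_salary,  -- arithmetic expression
       salary * 12 + NULL             AS always_null     -- arithmetic with NULL yields NULL
FROM   employees;
```
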
Using Single-Row Functions to Customize Output

Manipulating strings with character functions in SQL SELECT and WHERE clauses
Performing arithmetic with date data
Manipulating numbers with the ROUND, TRUNC and MOD functions
Manipulating dates with the date function
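
The single-row functions above, sketched against the sample HR schema:

```sql
SELECT UPPER(last_name)                   AS name_up,        -- character function
       SUBSTR(last_name, 1, 3)            AS short_name,
       ROUND(salary / 30, 2)              AS daily_pay,      -- round to two decimals
       TRUNC(salary / 30)                 AS daily_trunc,    -- cut off, no rounding
       MOD(employee_id, 2)                AS odd_even,       -- remainder
       hire_date + 90                     AS review_date,    -- date arithmetic in days
       MONTHS_BETWEEN(SYSDATE, hire_date) AS months_worked,
       ADD_MONTHS(hire_date, 6)           AS confirmed_on    -- date function
FROM   employees
WHERE  LOWER(last_name) LIKE 'k%';                           -- character function in WHERE
```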

Reporting Aggregated Data Using Group Functions
Restricting Group Results
Creating Groups of Data
Using Group Functions
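
A brief sketch of grouping, assuming the sample HR schema; GROUP BY builds the groups and HAVING restricts the group results:

```sql
SELECT department_id,
       COUNT(*)    AS headcount,
       AVG(salary) AS avg_sal,
       MAX(salary) AS top_sal
FROM   employees
GROUP  BY department_id
HAVING AVG(salary) > 6000       -- filters groups, not individual rows
ORDER  BY avg_sal DESC;
```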

Using Subqueries to Solve Queries
Using Single Row Subqueries
Using Multiple Row Subqueries
Update and delete rows using correlated subqueries
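
The three subquery flavors, sketched against the sample HR schema:

```sql
-- Single-row subquery:
SELECT last_name, salary
FROM   employees
WHERE  salary > (SELECT AVG(salary) FROM employees);

-- Multiple-row subquery:
SELECT last_name
FROM   employees
WHERE  department_id IN (SELECT department_id
                         FROM   departments
                         WHERE  location_id = 1700);

-- Correlated update: the inner query references the outer row:
UPDATE employees e
SET    salary = salary * 1.05
WHERE  salary < (SELECT AVG(salary)
                 FROM   employees
                 WHERE  department_id = e.department_id);
```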

Managing Tables using DML statements
Managing Database Transactions
Controlling transactions
Perform Insert, Update and Delete operations
Performing multi table Inserts
Performing Merge statements
Use DDL to manage tables and their relationships
Describing and Working with Tables
Describing and Working with Columns and Data Types
Creating tables
Dropping columns and setting column UNUSED
Truncating tables
Creating and using Temporary Tables
Creating and using external tables
Managing Constraints
Controlling User Access
Differentiating system privileges from object privileges
Granting privileges on tables
Distinguishing between granting privileges and roles
Managing Data in Different Time Zones
Working with CURRENT_DATE, CURRENT_TIMESTAMP, and LOCALTIMESTAMP
Working with INTERVAL data types
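
To tie the DML, DDL, constraint, privilege and time-zone topics together, here is a hedged sketch; every object name is illustrative, and the foreign key assumes the sample HR schema's employees table:

```sql
CREATE TABLE project_log (
  log_id    NUMBER        CONSTRAINT project_log_pk PRIMARY KEY,
  emp_id    NUMBER        CONSTRAINT project_log_emp_fk REFERENCES employees (employee_id),
  note      VARCHAR2(200) NOT NULL,
  logged_at TIMESTAMP WITH TIME ZONE DEFAULT SYSTIMESTAMP
);

INSERT INTO project_log (log_id, emp_id, note) VALUES (1, 100, 'kickoff');
UPDATE project_log SET note = 'kickoff meeting' WHERE log_id = 1;
DELETE FROM project_log WHERE log_id = 1;
COMMIT;                        -- transaction control (ROLLBACK and SAVEPOINT also apply)

MERGE INTO project_log t       -- conditional insert-or-update in one statement
USING (SELECT 2 AS log_id, 100 AS emp_id, 'review' AS note FROM dual) s
ON (t.log_id = s.log_id)
WHEN MATCHED     THEN UPDATE SET t.note = s.note
WHEN NOT MATCHED THEN INSERT (log_id, emp_id, note)
                      VALUES (s.log_id, s.emp_id, s.note);

GRANT SELECT ON project_log TO some_user;   -- object privilege (some_user is hypothetical)

-- Session-sensitive time functions and INTERVAL arithmetic:
SELECT CURRENT_DATE,                             -- session time zone, DATE
       CURRENT_TIMESTAMP,                        -- session time zone, with zone
       LOCALTIMESTAMP,                           -- session time zone, no zone
       SYSTIMESTAMP + INTERVAL '2' DAY AS in_two_days
FROM   dual;
```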

Oracle and Microsoft expand their partnership in revolutionizing database services
(Image source: Microsoft)

  • Oracle and Microsoft collaborate to provide direct, streamlined access to Oracle databases on Oracle Cloud Infrastructure for Azure customers.
  • Users can migrate or build new applications on Azure and connect to high-performance and high-availability managed Oracle Database services.

Many organizations are looking to digitalize their businesses and are taking the necessary steps to move their operations to the cloud. Against that backdrop, Oracle Corp and Microsoft Corp recently announced the general availability of Oracle Database Service for Microsoft Azure.

With this new offering, Microsoft Azure customers can easily provision, access, and monitor enterprise-grade Oracle Database services in Oracle Cloud Infrastructure (OCI) with a familiar experience. Users can migrate or build new applications on Azure and then connect to high-performance and high-availability managed Oracle Database services such as Autonomous Database running on OCI.

What Azure and OCI multi-cloud capabilities can offer

Thousands of customers have relied on Microsoft and Oracle software to run their mission-critical applications during the last 20 years. Hundreds of enterprises have used secure and private interconnections in 11 different global regions, including Singapore, since Oracle and Microsoft teamed up to launch the Oracle Interconnect for Microsoft Azure in 2019.

“Microsoft and Oracle have a long history of working together to support the needs of our joint customers, and this partnership is an example of how we offer customer choice and flexibility as they digitally transform with cloud technology. Oracle’s decision to select Microsoft as its preferred partner deepens the relationship between our two companies and provides customers with the assurance of working with two industry leaders,” said Corey Sanders, corporate vice president, Microsoft Cloud for Industry and Global Expansion.

As such, Microsoft and Oracle are extending this collaboration to further streamline the multi-cloud experience with Oracle Database Service for Microsoft Azure. In fact, this is a component of Oracle’s strategy to support customers by providing them with the cloud services they require, wherever they need them, according to Leo Leung, vice president of product management at Oracle, during a press briefing recently.

“In many cases, the public cloud is perfect for the workload or set of tasks that the customer wants to perform, and we’re continuing to expand that. We currently have 39 hyperscale cloud regions in worldwide locations and [further expansion is planned].”

The public cloud isn’t the only deployment model that customers are interested in or ask for, he continued. Being able to provide services on-premises is crucial, since between 60% and 80% of workloads, particularly mission-critical workloads, are still on-premises. Oracle recently addressed this with its Dedicated Region announcements.

Dedicated Region is an on-premises cloud that brings all of Oracle’s cloud services closer to on-premises legacy applications and data that might never move to the public cloud.

Leung stated, “Over the years, the data cloud customer and this database service for Azure is just another component of our overall strategy – in this case, to offer our services across the clouds into Azure.”

Modernizing customers’ database services approach

The amount of data that enterprises are gathering, managing, and analyzing on a daily basis has grown enormously as a result of their fast expansion. The use of legacy database models, which are unable to keep up with the rate of business expansion and the resulting rise in data, is a typical data burden.

When asked how easy it is for businesses to transition from a legacy database approach to a modern one, Leung said that customers have a choice. “If customers feel comfortable with moving to a completely automated service and serverless type of approach where all the common management tasks are completed for them, there is an option to use the autonomous database. That’s part of why we offer multiple flavors – to give customers the option of continuing with their existing system or moving to something that’s more ‘hands-off’,” he explained.

Are there risks behind not modernizing their database approach?

Steve Zivanic, Global VP Database and Autonomous Services, Oracle, claims that each business has its own unique set of business requirements. “Some customers continue to want to run on-premises, on Exadata database machines. There are some customers that want to move to a cloud on-premises environment like Exadata Cloud at Customer or Dedicated Region. Then, you have some customers who want to go fully public cloud or a combination of all 3,” he said.

Customers can move at their own pace depending on their business requirements and, most significantly, on what their applications require. Zivanic concluded that Oracle is highly open and collaborative when it comes to solving customers' business criteria.

(Source: TechWire Asia, 24 July 2022, by Muhammad Zulhusni – https://techwireasia.com/2022/07/oracle-and-microsoft-expand-its-partnership-in-revolutionizing-database-services/)
Oracle partners with Microsoft to launch database service for Azure

Oracle and Microsoft on Wednesday said that they were jointly launching a new service, dubbed Oracle Database Service for Azure, that will allow Azure customers direct access to Oracle databases running on Oracle Cloud Infrastructure (OCI).

The new offering, which is based on a three-year-old relationship between the two companies that allowed their common customer enterprises to run workloads across Microsoft Azure and Oracle Cloud to reduce latency, is a managed service that enables enterprises to provision and manage Oracle databases running on OCI using an Azure-native API and console, said Kris Rice, vice president of software development for Oracle Database.

This means that enterprises can monitor Oracle databases right from within the Azure environment.

"What we did was we took all the metrics and logs that are naturally produced by the servers on the Oracle OCI cloud, and we're cloning them automatically over to the user side to try and deliver customers that single pane view over their entire stack," Rice said.

There is no charge for the Oracle Database Service for Microsoft Azure, the Oracle Interconnect for Microsoft Azure, or data transfer when moving data between OCI and Azure, the companies said, adding that enterprises will need to pay for the other Azure or Oracle services they consume, such as Azure Synapse or Oracle Autonomous Database.  

Reducing complexity for CIOs, developers

The jointly developed service, which is generally available now, will reduce complexity for developers, CIOs, data scientists and engineers, according to analysts.

(Source: InfoWorld, 20 July 2022 – https://www.infoworld.com/article/3667443/oracle-partners-with-microsoft-to-launch-database-service-for-azure.html)
Cloud computing: Oracle and Microsoft make your database look like it's part of Azure

For big businesses, Microsoft has become the cloud-computing provider of choice. Many of these companies, however, still use Oracle databases to run core parts of their business. 

The two tech giants have already seized on that overlap, creating an interconnect that offers direct network connectivity between Microsoft Azure and Oracle Cloud. They're taking the partnership one step further now, building a new service that makes it easier to leverage that interconnect. The Oracle Database Service for Microsoft Azure effectively serves as a portal that lets joint customers use Oracle database services that look and operate as if they were a native part of Azure. 

"The things that you would traditionally do with a database service should be available, by default, in Azure," Karan Batta, VP of Oracle Cloud Infrastructure, said to ZDNet. "You can squint a little bit and basically combine the two clouds. We think of it as one experience."

For the past couple of years, Oracle has been making it easier to use its products with other cloud providers -- a kind of "if you can't beat them, join them" approach to the cloud. While Oracle has been a major force in enterprise technology for decades, it was late to the game when it came to offering public cloud services. 

Even if that weren't the case, extending its services beyond its own cloud makes sense, given that most businesses have already adopted a multi-cloud approach. Businesses like Snowflake have become extremely valuable because they help organizations move data across different clouds. 

The long-term vision for the new service, Batta said, is for it to be fully integrated into Microsoft Azure -- just like Snowflake.

"We've built a facade that looks and feels and operates like Azure, but we could throw that away, and Azure would just be able to integrate directly into this," he said. 

The new portal is an extension of Oracle Cloud Infrastructure (OCI), so everything launched there communicates with OCI -- but also with Azure. At launch, customers can use it to access three of Oracle's major services: its Autonomous Database, its basic database service and Exadata Database Service. Later in the year, Oracle will add MySQL HeatWave. 

Since the two companies launched their joint interconnect service about a year-and-a-half ago, customers have been using it to move data between the two different cloud providers. The service already has more than 300 organizations using it. Customers could use the interconnect to build applications across the two clouds, but they would have had to do all the heavy lifting. This new service, however, will make it easier to maintain workloads that leverage both OCI and Azure.

With the interconnect, Batta said, "we hoped customers would treat this as a single cloud, almost. But now we actually have the capability to do that."

The multi-cloud control plane that lets you operate the new service is designed to look like an Azure service. It provides a complete view of your data and applications. A customer could, for instance, use it to monitor their compute nodes in Azure, app analytics in Microsoft's Power BI and an Oracle database. 

If a customer prefers, they can punch out of the interface and return to the Oracle console. Conversely, an Azure customer using this service would never have to go to the Oracle console if they didn't want to. The control panel offers metrics and observability, and the service offers joint support from Microsoft and Oracle. 

While the service brings OCI services closer to Microsoft, Oracle intends to bring its services closer to other clouds as well. It's also exploring bringing Microsoft Azure services closer to OCI.

(Source: ZDNet, 20 July 2022 – https://www.zdnet.com/article/cloud-computing-oracle-and-microsoft-make-your-database-look-like-its-part-of-azure/)
Oracle, Microsoft deepen cloud ties

Oracle and Microsoft announced this week the release of Oracle Database Service for Microsoft Azure. The new service, the latest cloud collaboration between the two enterprise software giants, enables Azure customers to provision, access, and monitor Oracle database services operating on Oracle Cloud Infrastructure (OCI). Users can migrate or build entirely new apps on Azure and connect to managed Oracle Database services using the new offering. Microsoft said it won’t run the meter for Azure customers to move data between the two services, either. 

Microsoft and Oracle first announced Oracle Interconnect for Azure in 2019. Since then, enterprise cloudification efforts have moved with increasing velocity to hybrid cloud. This service acknowledges the new multi-cloud reality with simplified connectivity between the two disparate cloud environments. 

For Microsoft, threading the edge between the two clouds means a simplified interface and easier integration on the Azure side. That includes automatic configuration to link the two clouds. Microsoft federates Azure Active Directory identities associated with OCI databases, as well as an Azure-fluent OCI services dashboard. 

What’s more, Microsoft imposes no charges to use the Oracle Database Service, the Oracle Interconnect, or data egress or ingress when data is moved between OCI and Azure, said the company. Instead, customers pay for other Azure or Oracle services they need, like Azure’s Synapse analytics service, or Oracle’s own cloud-based Autonomous Databases.

Clay Magouyrk, executive vice president of Oracle Cloud Infrastructure, said the new offering will dispel the belief that it’s difficult to run “real applications across two clouds” without having in-depth knowledge of both.

“There is no need for deep skills on both of our platforms or complex configurations—anyone can use the Azure Portal to get the power of our two clouds together,” said Magouyrk.

Both hyperscalers seem on an ultimate collision course to compete for 5G standalone (SA) services. Microsoft’s Azure for Operators is going squarely after the same turf as Oracle Communications. Oracle group vice president of technology Andrew De La Torre told RCR Wireless News in April, “The 5G standalone core network was always the main act in this show.”

For Oracle, it’s a chance to start with a completely fresh page.

“We decided from the very outset to build our 5G solutions cloud native from the ground up — with no repurposed legacy code — because we firmly believed that the cloud native capabilities of our products are a critical part of what carriers will need,” said De La Torre. 

“At a 5G network level, we focused on then building only the components that we felt we could excel at, and perhaps more importantly, represented the most critical components of a carrier’s 5G network. As a result, we zeroed in on the control plane of the standalone core network,” he added.

That fresh sheet approach is, perhaps, a conceptual counterpoint to Microsoft Azure for Operators: AT&T’s former Network Cloud group, which Redmond acquired in 2021.

Competitors one day, cooperators the next, coopetition only on the days ending with y. The cloud draws no distinction. Both hyperscalers were lauded by Ukraine in early July along with Amazon and Google for aiding the Ukrainian government in its emergency efforts to move critical data and workflows to the cloud, literally and figuratively out of the way of invading Russian forces.

The “Distinction of the World” award was created by Ukrainian president Volodymyr Zelenskyy to identify those businesses and world leaders who have supported Ukraine since its invasion by the Russian Federation. Ukrainian Digital Transformation Minister Mykhailo Fedorov gave the awards to representatives of each of the companies in June and July in recognition of their efforts.

(Source: RCR Wireless News, 22 July 2022 – https://www.rcrwireless.com/20220722/telco-cloud/oracle-microsoft-deepen-cloud-ties)
Enterprise Resource Planning (ERP) gains ground in supply chain management

All-encompassing systems that manage inventory, procurement, manufacturing, orders, projects, human resources and other core capabilities for companies, enterprise resource planning (ERP) platforms continue to expand right along with their customers’ needs. They’ve come a long way since manufacturers started using them to manage inventory in the 1960s, and were officially named “ERPs” by Gartner in the 1990s.

Fast-forward to 2022 and ERP software capabilities include all of the above plus supply chain, logistics, product lifecycle, risk and maintenance management (to name a few). And if a certain capability doesn’t come “built into” the ERP, there are always application programming interfaces (APIs) available to connect the two and create a unified platform that shares the same data, insights and capabilities.

With the latest spate of supply chain disruptions, transportation snarls and labor shortages creating a bigger need for supply chain management (SCM) functionalities, ERP vendors have responded by bolstering their offerings in this realm. Concurrently, the best-of-breed SCM vendors have stepped up to the plate and refined their offerings, added new functionalities and even made them easier to connect to outside applications.

On track to hit $78.4 billion in revenues by 2026—up from $38.8 billion in 2018—the global ERP market is being driven by a greater need for operational efficiency and transparency in business processes, increased use of Cloud and mobile applications, and high demand for data-driven decision-making.

Increased demand from small- to mid-sized businesses plus the ongoing technological advancements on the part of vendors are also accelerating the adoption rates of these multi-faceted software platforms.

Staying at the forefront

Now squarely in their third year of a global pandemic, shippers of all sizes and across all industry sectors are investing in technology to help them address current challenges and begin to plan for the future.

Those with ERPs in place are enabling more functionalities—many of them related to SCM—while others are implementing platforms that help them work smarter, better, and faster in an uncertain business environment.

“In general, there’s definitely more of a focus on supply chain than there has been in the past. It’s at the forefront of everybody’s mind right now and will likely stay there for at least the next few years,” says Bill Brooks, VP, NA transportation portfolio at Capgemini. In response to these needs, he says both the ERPs and the best-of-breed SCM vendors are investing in more digitalization, Cloud computing, Artificial Intelligence (AI), digital twins, analytics and other advanced technologies that converge to help shippers develop smooth-running, end-to-end supply chains.

For now at least, Brooks sees plenty of room in the marketplace both for broader-reaching ERP vendors and more specialized best-of-breed software developers. They both continue to invest in their platforms and serve their respective markets, he adds.

“Everyone has their preferences as to what type of software they want,” says Brooks, “and those preferences probably aren’t changing in the short-term.”

ERPs dive deeper into WMS

As companies continue to work out their current inventory, labor, and transportation issues, more attention is being paid to the warehouses and distribution centers (DCs) that receive and stock goods and then ship orders. With more customers demanding ultra-fast shipping and eMarketer expecting another 14.8% increase in U.S. retail e-commerce sales this year, warehouse management systems (WMS) have been getting more attention and investment, both on the part of shippers and ERP vendors.

“The ERPs are starting to see WMS as an application area that’s worth pushing further,” says Clint Reiser, director of supply chain research at ARC Advisory Group. In some cases, ERP vendors are developing and then offering WMS to their current customers or “install” bases. In other examples, they’re selling the SCM application to customers that are outside of their install bases.

“This holds true with Oracle and possibly SAP as well,” says Reiser, who adds that Oracle has recently signed on customers for its Cloud-based WMS and then had those users also adopt its Cloud transportation management (TMS) platform. “In the past, it’s almost exclusively been TMS first and then WMS as an add-on,” he points out.

Overall, Reiser says WMS is becoming a “higher priority” for ERP vendors like Infor, Oracle and SAP. He points to the pandemic-related challenges plus the rise in e-commerce with driving at least some of this interest. “The WMS application area may be getting more emphasis from the vendors because of its greater interest out in the market,” he explains, “due to e-commerce, the COVID-related disruptions and shortages, and the broader supply chain crises.”

Protecting their turfs

Roll the clock back about 10 years and Reiser remembers that many of the best-of-breed SCM vendors were in the early stages of building out their platforms, with JDA working on its “Supply Chain Process Platform” and Manhattan Associates introducing its SCALE offering. Other vendors followed suit.

As technology advanced, the introduction of microservices—software made up of small, independent services that use APIs to communicate with one another—further enabled integration capabilities in the Cloud. This evolution facilitated the exchange of information between adjacent applications like TMS, WMS, distributed order management (DOM) and others.

Ultimately, these advancements gave best-of-breed SCM vendors the power they needed to be able to create more integrated, end-to-end processes. No longer just “standalone” applications, these SCM solutions could now work in tandem with other best-of-breed applications and/or with larger enterprise solutions. Borrowing a term from the business strategy sector, Reiser says this helped best-of-breed vendors construct “moats” around their applications.

“[Microservices] help specialty software vendors keep others off of their turf and solidify their place in the [market],” says Reiser, who sees the use of microservices in SCM continuing. “Now, some of them are using microservices to build up their solutions with a common database that allows them to compete on the basis of end-to-end supply chain unification.”

There’s room for both

Looking around at the ERP space, Dwight Klappich says vendors operating in it have matured their supply chain capabilities, warehousing, transportation and other components in order to address the basic needs of a high percentage of their customers.

“For companies that don’t have the most complex or sophisticated needs, ERP supply chain solutions are well worth consideration,” says Klappich, senior director, supply chain research at Gartner, Inc. “In most cases, if you’re committed to an ERP platform like Oracle, SAP or Microsoft, you probably should shortlist your ERP vendor.”

Shippers that need more robust software capabilities would be wise to broaden that scope and add best-of-breed solutions to those shortlists. “There’s a market out there for best-of-breed solutions,” says Klappich, “and room for both the ERPs and the specialty SCM vendors.” For example, he says Gartner’s 2022 Magic Quadrant for WMS is dominated by six vendors, three of which are ERPs (SAP, Oracle, and Infor) and the other three are supply chain suites (Blue Yonder, Manhattan, and Korber).

In some cases, Klappich says the ERPs have an advantage because they can invest in new functionalities that can be effectively leveraged across the entire software suite. This creates economies of scale on the research and development (R&D) front, where ERPs that invest in SCM can then leverage those advancements across their entire platforms.

Take analytics, for example. According to Klappich, Oracle, SAP and Infor have all invested in robust analytics platforms that can be used across all of their applications. Specialty vendors, on the other hand, either have to try to replicate that investment or partner/integrate with a third party application provider that does offer those capabilities.

Right now, Klappich says the ERP and best-of-breed vendors share an important goal of improving the user experience. He points to Oracle’s introduction of the Redwood Design System in 2020 as one example of this. Redwood is the new Oracle standard for application look and feel, and was implemented company-wide to help unify the user interface of all of the company’s product offerings.

“Some of it is aesthetics, but as a whole Redwood takes into account how to improve productivity by streamlining the user experience,” says Klappich, “and by factoring in things like conversational voice capabilities and embedded search.” He adds that best-of-breed SCM vendors are taking a similar approach with the goal of improving the user experience, staying out in front of the ERPs, and further protecting their turf.

More seamless flow

Looking ahead, Brooks sees the ERPs continuing to build out their SCM offerings in an attempt to “jump over” the best-of-breed vendors. He also sees the best-of-breeds solidifying their positions in the market by staying nimble and innovative.

“At this point, I don’t see either one of those getting a leg up on the other,” says Brooks, “but I do expect more integrations across different software platforms/vendors plus the continued use of microservices and digitalization to create an even more seamless flow than we’ve seen in the past.”

(Source: Logistics Management, 8 August 2022 – https://www.logisticsmgmt.com/article/enterprise_resource_planning_erp_gains_ground_in_supply_chain_management)
Deploying disaster-proof apps may be easier than you think

Interview: In the wake of Google and Oracle's UK datacenter meltdowns earlier this week, many users undoubtedly discovered that deploying their apps in the cloud doesn't automatically make them immune to failure.

But according to an upcoming report from the Uptime Institute, building applications that can survive these kinds of cloud outages – heat induced or otherwise – doesn't have to be difficult or even all that expensive.

Whether an application remains operational or not in the event of a cloud outage depends entirely on how it's been deployed, Uptime Institute analyst Owen Rogers told The Register in an exclusive interview.

This shouldn't come as a surprise to most, yet Rogers says half of enterprises surveyed were under the impression application resiliency was the cloud provider's responsibility.

"There's still a lack of clarity about who takes ownership of the resiliency issue when it comes to cloud," he explained, adding that while the public cloud providers offer many tools for building resilient applications, the onus is on the user to implement them.

An analysis of these tools showed that achieving high degrees of resiliency was a relatively straightforward prospect – especially when the cost of lost business and cloud SLAs were taken into consideration.

For the report, Rogers investigated seven scenarios for deploying stateless applications in virtual machines in the cloud, each with varying degrees of fault tolerance.

Zone-level resiliency

On its own, a VM running in the cloud provides zero protection in the event of a service, zone, or regional outage.

For those that aren't familiar, cloud regions provide services to large geographic areas and are typically made up of two or more independent datacenters, referred to as availability zones.

By deploying a load-balancer and second VM in an active-active configuration, where traffic is routed across each of the instances, Rogers claimed customers could achieve basic protections against application faults at a cost just 43 percent higher than a single VM.

And for those that can tolerate a 15-minute downtime, an active-failover approach – where the second VM is spun up upon the first's failure – cost just 14 percent more than the baseline.

However, this approach only provides protection in the event of an application fault, and won't do the user any good if the entire zone experiences an outage.

The good news, according to Rogers, is it doesn't cost any more to employ either approach across multiple availability zones within a cloud region, but offers substantially better resiliency – in the neighborhood of 99.99 percent.
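
To see why zone-level duplication buys so much, here is the standard independent-failure arithmetic – an illustration with an assumed single-VM figure, not Uptime's exact model:

```latex
% Availability of an active-active pair under independent failures.
% Assume a single VM is A = 99\% available (illustrative figure only):
\[
  A_{\mathrm{pair}} = 1 - (1 - A)^2 = 1 - (0.01)^2 = 0.9999 \approx 99.99\%
\]
% Stacking the same logic across two regions multiplies the residual
% failure probabilities again, which is how mirrored multi-region
% deployments approach the "six nines" cited later -- before shared
% failure modes (control planes, DNS) erode the idealized figure.
```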

"The cloud providers have made it really easy to build across availability zones," he said. "You would have to almost find a reason not to use availability zones considering they're so easily configured, and they provide a good level of resiliency."

In this style of deployment, the application would survive anything short of a complete regional failure. Overcoming that rare occurrence is a bit trickier, Rogers explained.

Multi-region deployments

For one, traffic between cloud regions is often subject to ingress and egress charges. In addition, load balancers alone aren't enough to route the traffic, and an outside service – in the case of Uptime's testing, a DNS server – was required.

"The load balancer can easily distribute traffic between two active virtual machines in the same region," Rogers explained. "When you want to balance it across different regions, that's when you have to use the DNS server."

The investigation explored several approaches to multi-region deployments. The first involved mirroring a zone-based, active-active deployment across multiple regions, using DNS to distribute traffic between the two. The approach offered the greatest resiliency – six nines of availability – but it did so at the highest cost: roughly 111 percent greater than a standalone VM.

Rogers also looked at what he called a "warm standby" approach, which used a zone-based, active-active configuration in the primary region and a standalone VM in the failover region. The deployment offered similar availability and resilience as the mirrored regional deployment, at a cost 81 percent higher than the baseline.

Finally, for those that want to hedge their bets against a regional failure, but are willing to contend with some downtime, a regional active-failover approach could be employed. In this scenario, if the primary region failed, the DNS server would reroute traffic to the failover region and trigger the backup VM to spin up. The deployment was also the least expensive multi-region approach explored in the report.

However, the report cautions that if the application was under pressure at the time of the outage, a single VM in the failover region may not be sufficient to cope with the traffic.

"The load balancer always provides a day-to-day level of resiliency, but the DNS level of resiliency is far more of an emergency type," Rogers said.

Because of this, he argues multi-zone resiliency is likely the sweet spot for most users, while a multi-region approach should be carefully considered to determine whether the benefits outweigh the added complexity.

SLAs aren't an insurance policy

What customers should not do is expect SLAs to make up for downtime resulting from a lack of application resiliency.

"The compensation you get is not necessarily proportional to the amount you spent," Rogers explained. "You'd obviously assume the more you spend on the cloud, the greater your compensation will be if something goes wrong – but that's not always the case."

Different cloud services have different SLAs, he said, adding that customers may be surprised to find that in the event of a failure they're only compensated for the service that actually went down, not the application as a whole.

"SLA compensation is poor and is highly unlikely to cover the business impacts that result from downtime," Rogers wrote in the report. "When a failure occurs, the user is responsible for measuring downtime and requesting compensation – it is not provided automatically."

That's not to say SLAs are completely worthless. But customers should think of them as an incentive for cloud providers to maintain reliable services, not as an insurance policy.

It's almost like they're [SLAs] used as a mechanism to punish the cloud provider rather than to refund you for the damage.

More to be done

Rogers's analysis of cloud resilience is far from over. The report is the first in a series that aims to address the challenges associated with highly available cloud workloads from multiple angles.

"We still haven't scratched the surface of how all these other cloud services are actually going to have reliable and assured levels of resilience," he said.

For example, the workloads in this report were stateless – meaning that data didn't need to be synced between the zones or regions, which would have changed the pricing structure once ingress and egress charges were taken into account.

"If there is data transferring from one region to another – for example, because a database is replicating to a different region – that can have a massive cost implication," Rogers explained.

However, as it pertains to VMs, he said, "once you've done the bare minimum, the cost incremental to add more resiliency is fairly slim." ®

(Source: The Register, 21 July 2022 – https://www.theregister.com/2022/07/22/building_disasterproof_apps_might_be/)
Oracle's India cloud unit targets triple-digit growth in next few years
Oracle Cloud Infrastructure's (OCI) India unit is targeting triple-digit growth for the next couple of years, on the back of the country's economic growth and the middle class's increasing spending on technology, top executives said.

It is also betting big on the financial sector as well as government projects to drive this growth.

“The Indian economy is forecast to expand nearly 8% this year. It is by far above every other economy in the world. If you factor in the size of the economy and also the growth of the middle class, it's getting wealthier,” said Garrett Ilg, president of Japan and Asia Pacific at Oracle.


This comes as Oracle's India business has been a strong growth engine for the company – with the OCI unit clocking over 100% growth for the third year in succession and the software-as-a-service (SaaS) business also more than doubling in each of the last two years.

Ilg said the number is not an aberration and is sustainable for the next two years. “It (growth) is absolutely because (of) the momentum that is happening in the Indian market,” he told ET on the sidelines of the Oracle India Partner Forum event.

The company views small and medium enterprises, including startups, using its NetSuite, enterprise resource planning, human capital management and data analytics products.

The Texas, US-headquartered company identifies the financial sector, with its competitive landscape, customer base and mature micropayment systems, as a huge opportunity for growth, along with the public sector, given the push to reduce costs and deliver efficient services.

Strategic growth area
The tech giant said the public sector and government business was a strategic area for its growth in India.

It has worked on the Niti Aayog's aspirational districts programme, which monitors the 112 most under-developed districts in the country, and the Income Tax Department has been using Oracle Marketing Cloud solutions to reach out to taxpayers and to create awareness.

“We are seeing government projects as a strategic area for us and will continue to invest there. Initiatives such as the Diksha programme for e-learning and Open Network for Digital Commerce (ONDC) will be game changers,” said Shailender Kumar, senior vice president and regional managing director, Oracle India.

Within the ONDC, it is working on the secured logistics digital exchange (SLDE) that integrates buyers, sellers, banks and logistic players.

The company is adding a dedicated team to manage SaaS solutions for public sector business, said Kumar.

Oracle recently closed a large project from Uttar Pradesh Power Corporation and is in discussions with state governments such as in Odisha, West Bengal, Haryana and Maharashtra for public sector projects.

Will gain market share
Oracle said it changed the approach to a subscription-based and consumption-based model from chasing big deals with big discounts. It expects this to help close the market share gap with rivals.

“Oracle has modified the cloud market with our focus on consumption. We are actually looking for companies that know what they need now and then they can subscribe to more later. We don't want customers to buy big just to get a discount,” Ilg said.

According to Synergy Research Group, Amazon’s AWS, Microsoft Azure and Google Cloud are three top players in the market as of December 2021 – with a market share of 33%, 21% and 10%, respectively.

The change to a subscription-based business model, along with its partnership with Microsoft Azure for multi-cloud capabilities and the strong security layer in the OCI Gen 2 model, are "big differentiators" that the Oracle executives said would help the company gain market share.

Last month, Oracle and Microsoft announced the availability of Oracle Database Service for Microsoft Azure. With this new offering, Microsoft Azure customers can access and monitor Oracle Database services in OCI.

(Source: The Economic Times, 31 July 2022 – https://economictimes.indiatimes.com/tech/information-tech/oracle-cloud-infra-targets-triple-digit-india-growth-on-financial-sector-boom/articleshow/93257762.cms)
Cry havoc as the macro-economy leads to decision paralysis, but Rimini Street will be ready to pick up the pieces, says CEO Seth Ravin

The macro-economic environment will help Rimini Street in the longer term, says CEO Seth Ravin, but for now it’s creating some headwinds that impacted revenue growth in Q2.

The company turned in quarterly revenue of $101.2 million, up 10.5% year-on-year, with net income of $0.1 million, down from $6.8 million for the comparable period last year. Ravin said:

In line with other companies, we faced global macro-environment and currency exchange rate headwinds that impacted quarter results. We believe that the macro-environment will ultimately benefit our business after organizations complete a replanning adjustment cycle and we're addressing it and other opportunities with changes that include my return to oversee global revenue operations to reaccelerate growth.

He added:

Globally, companies are facing impact to profits caused by continuing post-pandemic supply chain challenges, global macro challenges, including war, sanctions, trade disputes, deglobalization, inflation, rising interest rates and currency exchange rate movements. These macro-shocks were originally believed to be short-term impacts but are now being viewed as likely multi-year headwinds that are forcing organizations to replan their businesses, including their IT investment plans.

The replanning phase has frozen many investment decisions. Frozen IT decisions impacted the market as a whole during the second quarter as reflected in extended and delayed IT sales cycles for many companies, including Rimini Street. However, as previously noted, we believe that once organizations complete their replanning process, Rimini Street is well positioned to ultimately benefit from this macro-environment with growth in new client acquisitions.

Other stats of note from the post earnings analyst call:

  • Annualized Recurring Revenue was $396.7 million for Q2, an increase of 9.6% compared to $362.1 million for the same period last year.
  • Subscription revenue was $99.2 million, which accounted for 98.0% of total revenue for the 2022 second quarter, compared to subscription revenue of $90.5 million for the comparable period.
  • Active Clients as of June 30, 2022 were 2,905, an increase of 9.8% compared to 2,645 Active Clients as of June 30, 2021.
  • In Q2, the firm closed more than 9,411 support cases and delivered nearly 9,813 tax, legal and regulatory updates to clients across 30 countries.
  • Average client satisfaction score was 4.9 out of 5.0.

Customers 

New customers in Q2 included Labeyrie Fine Food, Sajo Systems, SK Networks, Lwart, E-LAND Innople and State Library of Victoria, Australia’s oldest library. Ravin cited a couple of use cases:

The State Library of Victoria, Australia's oldest library, trusted Rimini Street for support of their Oracle E-Business and Oracle database software. Rimini Street is also providing the library with its advanced database security and advanced application security solutions, two solutions in its Rimini Protect suite of security products. These solutions provide an innovative approach to security that can block vulnerabilities before an attack or close an attack vector within hours, unlike traditional vendor software patch models that can take days, weeks, months or years to receive a patch and require significant testing time and cost to implement.

Rimini Street's cybersecurity solutions provide the library with peace of mind that digital threats are being addressed. Chief Financial Officer Bradley Vice noted that his finance team are very happy with the support and security that Rimini Street provides, which keeps their assets and their customers secure and their finance services running. He further noted that Rimini Street worked with his team to identify and provide solutions for the risks they face and that the team enjoys the services they are receiving.

He also pointed to Labeyrie Fine Food, a French retailer with facilities in 48 countries:

Labeyrie switched its Oracle JD Edwards and Oracle Database Support to Rimini Street. With 80 application modules connected to the Oracle ecosystem, Labeyrie processes more than 500,000 batches of orders nightly. Maintaining the system became challenging after the software vendor ended full support for this mission-critical system. Labeyrie sought a solution to provide the mission-critical support they needed and create more value for the organization while simultaneously reducing costs.

Labeyrie's CIO, Louis Goffaux, stated that in addition to supporting their ERP system, Rimini Street's experts provide guidance on potential changes to how they use the platform. And that the monthly and quarterly meetings with the Rimini Street are extremely worthwhile, exactly the type of close all-around support they were looking for. He goes on to note that Rimini Street provides an efficient, agile service at half the price they were previously paying the software vendor. 

My take

There are deals that pushed. As you're seeing in so many different company earnings, delayed sales cycles, lengthened sales cycles, we're certainly seeing some of that as well because as these companies and government organizations re-jig their plans in order to prepare for a multi-year potential recession, different type of environment, they are not doing anything because they're not buying anything while they finish those plans. Based on the involvement that we've seen, the kind of work that we're involved with clients and prospects around the world, we feel pretty positive about the fact that we're going to benefit when they finally finish their plans. We intend to be a part of them.

There’s a long game in play here, with the underlying message being one of ‘keep calm and carry on’ as what Ravin calls “this daily global macro-drama” unfolds:

It's havoc on businesses. I mean, there's no other way to say it. When you have so many different questions – should I build a factory in China? Am I going to have political issues with that? How do I deal with Eastern Europe? How do I think about energy, if I'm going to build a plant inside of Europe? All these questions are now wreaking havoc to the point of causing internal paralysis for a lot of companies, and they're having to step back and rethink. Now that doesn't mean they don't want to save money. That's why I said, ultimately, I expect to prevail.

Onwards.

(Source: diginomica, 7 August 2022 – https://diginomica.com/cry-havoc-macro-economy-leads-decision-paralysis-rimini-street-will-be-ready-pick-pieces-says-ceo)
Datadog: What You Need To Know Before The Earnings On Thursday
(Image: Charday Penn/E+ via Getty Images)

(Disclaimer: before we start, I'm not a developer or a software engineer. So, if despite my efforts, there are still mistakes in this article, please let me know!)

Datadog? What?

Datadog (NASDAQ:DDOG) is not an easy company to understand if you don't work in software (hence the disclaimer above). I will try to dissect the company and hopefully, at the end of the article, you understand what it does and how it makes money. I also take a brief look at the earnings.

An introduction to Datadog and its history

Datadog was founded in 2010 by Olivier Pomel and Alexis Lê-Quôc, who are still leading the company, Pomel as the CEO, Lê-Quôc as the CTO (Chief Technology Officer). The two Frenchmen are long-time friends and colleagues. They met in the Ecole Centrale in Paris, where they both got computer science degrees.

(Olivier Pomel, from the company's website)

Olivier Pomel is an original author of the VLC Media Player, whose logo a lot of you will recognize.

(The VLC Media Player Icon)

Pomel and Lê-Quôc both worked at Wireless Generation, a company that built data systems for K-12 teachers. For those who don't know the American educational system, K-12 stands for all years between kindergarten and the 12th grade, from age 5 to 18. K-12 has three stages: elementary school (K-5), middle school (6-8) and high school (9-12).

Wireless Generation is now called Amplify and it offers assessments and curriculum sources for education to schools. Wireless Generation was sold to Newscorp in 2010, which was the sign for the two friends to go and found their own company. Pomel was VP of Technology for Wireless Generation and he built out his team from a handful of people to almost 100 of the best engineers in New York.

Yes, you read that right, New York. Because Pomel and Lê-Quôc knew many people in the developer scene in New York, Datadog is one of the few big tech companies not based in Silicon Valley. The company's headquarters are still in Manhattan today, on 8th Avenue, in the New York Times building, close to Times Square and the Museum Of Modern Art.

Before Wireless Generation, Pomel also worked at IBM Research and several internet startups.

Alexis Lê-Quôc is Pomel's co-founder, friend, and long-time colleague. He is the current CTO of Datadog.

(Alexis Lê-Quôc, from the company's website)

Alexis Lê-Quôc served as the Director of Operations at Wireless Generation. He, too, built a team there, as well as a top-notch infrastructure. He also worked at IBM Research and other companies like Orange and Neomeo.

DevOps, Very Important For Datadog

He has been a long-time proponent of the DevOps movement, and that's important for understanding Datadog. The DevOps movement tried to solve the problem that many developers and operators worked next to and even against each other, almost acting as enemies. DevOps focuses on how to bring them together to make everything more frictionless. Developers often blamed the operational side if there was a problem (for example, a database that was not up-to-date) and operators blamed developers (a mistake in the code). By working together in teams, with good communication and as much integration between the teams as possible, DevOps tried to solve that problem.

The problem was that there was no software for a unified DevOps platform, and Datadog helped solve that. If you want to know where a problem lies, it helps to have a central observability platform for DevOps, and Datadog set itself the task of building one.

As a tech company in New York, Datadog had quite a lot of trouble raising money initially. But once it had secured money, it started building, releasing the first product in 2012, a cloud infrastructure monitoring service, just ten years ago. It had a dashboard, alerting, and visualizations.

In 2014, Datadog expanded to include AWS, Azure, Google Cloud Platform, Red Hat OpenShift and others.

Because of the French origin of the founders, it was natural for them to think internationally from the start. The company set up a large office and R&D center in France to conquer Europe quite early in its history, in 2015 already, just three years after the launch of its first product.

Also in 2015, it acquired Mortar Data, a great acquisition. Up to then, Datadog just aggregated data from servers, databases and applications to unify the platform for application performance. That was already revolutionary at the time. Datadog already had customers like Netflix (NFLX), MercadoLibre (MELI) and Spotify (SPOT). But Mortar Data added meaningful insights to Datadog's platform. This allowed Datadog's customers to improve their applications constantly.

Datadog really needed this as companies like Splunk (SPLK) and New Relic (NEWR) had done or were in the process of doing the same. Datadog was seen as a competitor of New Relic at the time. To a certain extent, that is still the same today.

In 2017, Datadog did a French acquisition with Logmatic.io, which specialized in searching and visualizing logs. It made Datadog the first to have APM (application performance monitoring), infrastructure metrics and log management on a single platform.

In 2019, Datadog bought Madumbo, another French company. It's an AI-based application testing platform. In other words, because of the self-learning capabilities, the platform becomes more and more powerful in finding weak links and reporting them without the need to write additional code. Instead, it interacts with the application in a way that is as organic as possible, through test e-mails, password testing, and many other interactions while testing everything for speed and functionality. The bot can also detect JavaScript weaknesses. The capability was immediately added to the core platform of Datadog.

Also in 2019, Datadog founded a Japanese subsidiary and in September of 2019, Datadog went public.

(The Datadog IPO, source: CNBC)

Before its IPO, Cisco (CSCO) tried to buy Datadog above the range of its IPO price. Pomel, on how he thought about this $8B offer:

Wow this is a lot of money! But at the same time I see all this potential and everything else in front of us and there’s much more we can build

Datadog decided not to sell and on the first day that the company traded, it jumped to a valuation of almost $11B.

The name Datadog is a remarkable one. Neither of the founders had or particularly liked dogs. At Wireless Generation, Pomel and Lê-Quôc named their production servers “dogs”, staging servers “cats” and so on. “Data dogs” were production databases. There were dogs to be afraid of. Pomel:

“Data Dog 17” was the horrible, horrible, Oracle database that everyone lived in fear of. Every year it had to double in size and we sacrificed goats so the database wouldn’t go down.

So it was really the name of fear and pain and so when we started the company we used Datadog 17 as a code name, and we thought we’d find something nicer later. It turns out everyone remembered Datadog so we cut the 17 so it wouldn’t sound like a MySpace handle and we had to buy the domain name, but it turned out it was a good name in the end.

What Datadog does

Datadog describes what it does as 'Modern monitoring & security'. I could give you the explanation of what that means myself, but since founder and CEO Olivier Pomel does a really good job of explaining, from a high level, what Datadog does, why would I not let him do it, right?

Whenever you watch a movie online or whenever you buy something from a store, in the back end, there’s ten thousand or tens of thousands of servers and applications and various things that basically participate into completing that – either serving the video or making sure your credit cards go through with the vendor and clears everything with your bank.

What we do is actually instrument all of that, we instrument the machines, the applications, we capture all of the events – everything that’s taking place in there, all of the exhausts from those machines and applications that tell you what they’re doing, how they’re doing it, what the customers are doing.

We bring all that together and help the people who need to make sense of it understand what’s happening: is it happening for real, is it happening at the right speed, is it still happening, are we making money, who is churning over time. So we basically help the teams – whether they are in engineering, operations, product or business – to understand what these applications are doing in real time for their business.

In the old days, you had a development team that made an application and it took maybe six months before it was operational. For the next few years, that was it, no changes could be made. If the developers regretted a weakness, they had to wait for a few years, until the next upgrade.

That changed with the cloud. You could now constantly upgrade and developers can easily make changes without going through a whole administrative and technological drag of a process. If you implement a certain code and you think there is a better solution the next day, no problem. Moreover, Datadog will show you what doesn't really work well.

Olivier Pomel gives a few examples of issues Datadog can help its customers with:

There’s a number of things our customers can’t do on their own. For example they don’t know what’s happening beyond their account on a cloud provider. One thing we do for them is we tell them when we detect an issue that is going to span across different customers on the cloud provider. We tell them “hey, you’re having an issue right now on your AWS and it’s not just you.” It’s very useful because otherwise they have no way to know; they see their screen go red and they have to figure out why that is.

Other things we do is we‘re going to watch very large amounts of signals that they can’t humanly watch, so we’re going to look at millions of metrics and we’re going to tell them the ones that we know for sure are important and not behaving right now, even if they didn’t know that already, if they didn’t know “I should watch this”, “I should put an alert on that”, “I should go and figure out if this changes”. These are examples of what we do for them.

The problems that Datadog solves

Datadog helps with observability, and this in turn limits downtime, keeps development and deployment under control, finds and fixes problems, and provides insight into every necessary detail on a unified platform.

To make it more concrete and show where Datadog can make a difference, Olivier Pomel has a good way of explaining exactly what problem Datadog solves. He talks about Wireless Generation, where he and Alexis Lê-Quôc headed development and operations, respectively.

I was running the development team, and he was running the operation team. We knew each other very well, we had worked together, we were very good friends. We started the teams from scratch so we hired everyone, and we had a “no jerks” policy for hiring, so we were off to a good start. Despite all that, we ended up in a situation where operations hated development, development hated operations, we were finger pointing all day.

So the starting point for Datadog was that there must be a better way for people to talk to each other. We wanted to build a system that brought the two sides of the house together, all of the data was available to both and they speak the same language and see the same reality.

It turns out, it was not just us, it was the whole industry that was going this way.

Datadog covers what it calls 'the three pillars of observability': metrics, traces and logs (among other things).

A metric is a data point that is measured and tracked over time. It's used to assess, compare and track the performance of code in production.

Traces are everything that has to do with a program's execution: the metadata that connects everything. When you clicked on this article, the link in your mail took you here, but the page itself was retrieved from a database; those connections can be found in traces. Traces are often used for debugging or making the software better.

Logs are events generated by any of the participants in any of the systems. There are system logs (which have to do with the operating system), application logs (which have to do with activity in the application, the interactions), and security logs (which record access and identity).
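To make the three pillars concrete, here is a minimal Python sketch of how an application might emit all three. It assumes a Datadog Agent is running locally on the default DogStatsD port and that the open-source `datadog` (datadogpy) and `ddtrace` client libraries are installed; the metric, span, service and log names are hypothetical.

```python
import json
import logging

from datadog import initialize, statsd  # DogStatsD client from datadogpy
from ddtrace import tracer              # Datadog APM tracing client

# Metrics: point the DogStatsD client at a locally running Datadog Agent.
initialize(statsd_host="127.0.0.1", statsd_port=8125)

# Logs: emit structured JSON so log events are machine-parseable.
logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("checkout")

def handle_checkout(order_id: str) -> None:
    # Trace: one span per request connects this code path to what it calls.
    with tracer.trace("checkout.process", service="webstore", resource="POST /checkout"):
        # Metric: count every checkout attempt, tagged by environment.
        statsd.increment("checkout.attempts", tags=["env:prod"])

        # ... charge the card, write the order to the database ...

        # Log: a structured event describing what just happened.
        log.info(json.dumps({"event": "order_completed", "order_id": order_id}))

handle_checkout("order-123")
```

The point of the sketch is that all three signals come from the same code path, which is exactly why it pays to view them on one platform.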

Companies used to have a separate software solution for each. For metrics, companies had monitoring software like Graphite. For traces, developers needed other software: APM, or application performance monitoring, from New Relic (NEWR), for example. And for logs, there was log management software like Splunk (SPLK).

These platforms didn't talk to each other, so developers or operators had to open them all separately and compare the silos manually. That didn't make sense, of course. Problems often cut across those borders; therefore, it made sense to unify everything on one platform, and that's exactly what Datadog did.

This allows observability teams to act much faster, especially because Datadog also provides the context of why something unexpected happens.

The solutions that companies use are more and more complex, weaving together more applications, multi-cloud hosting, more APIs, bigger or more teams working on separate projects simultaneously, edge computing and so on. More than ever, there is a need for 'one to rule them all' when it comes to observability, and that is the fight that Datadog seems to have won.

If you look at the company's timeline, you see that initially it only had infrastructure monitoring, i.e. metrics. Along the way, Datadog added logs and traces, but other things too.

(Datadog timeline, source: Datadog's S-1)

As you can see, when Datadog added the "three pillars of observability" it didn't rest on its laurels.

In 2019, it introduced RUM, or real-user monitoring. It's a product that allows Datadog customers to see how real users interact with their products (a site, for example, or a game). Think about how many people who have downloaded a game click on the explanation before playing, and which mistakes they still make; how many immediately start playing; whether they can find the play button fast enough; and so on. Or think about new accounts: if there is a steep drop, Datadog will flag it and engineers can investigate. Maybe the latest update had a bug that no longer lets users log in through their Apple account, for example.
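Datadog's RUM does this kind of detection for you out of the box, but to make the idea tangible, here is a toy Python sketch of the "steep drop" check just described. The data, threshold and alert are hypothetical; this illustrates the concept, not Datadog's actual implementation.

```python
# Hourly counts of new-account signups, most recent hour last (hypothetical data).
signups_per_hour = [118, 124, 121, 130, 127, 42]

def steep_drop(series, window=5, threshold=0.5):
    """Flag the latest value if it falls below `threshold` times
    the average of the preceding `window` values."""
    baseline = sum(series[-window - 1:-1]) / window
    return series[-1] < threshold * baseline

if steep_drop(signups_per_hour):
    # In real life this would page an engineer; here we just print.
    print("Alert: signups dropped sharply - maybe the Apple login broke?")
```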

I'm returning to synthetics in a minute, but I first want to mention security, which is not on the roadmap above yet. As we all know, security has become much more important than it was just a few years ago, and therefore it's important to integrate security people into the DevOps team as well, making it a DevSecOps team. Datadog has already adapted to the new DevSecOps movement.

It introduced the Datadog Cloud Security Platform in August of 2021, which means that it now offers full-stack security context on top of Datadog's observability capabilities. Again, just as with DevOps, the company is early in what is clearly a developing trend (pun not intended) in software: the integration of security experts into the core DevOps team. Datadog offers a unified platform for all three, and security issues can be coupled to data across the infrastructure, the network and the applications. It allows security teams to respond faster and gain much more granular insights after a breach has been resolved.

Again, Datadog solves a real problem here. As more and more data moves to the cloud, security teams often have less and less visibility, while attacks become more and more sophisticated. That's why it's important to give these teams that visibility back and give them a tool to implement security. Developers and operations can implement security at all levels of software, applications and infrastructure.
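To give a flavor of the kind of signal a DevSecOps team cares about, here is a toy Python sketch that scans structured access-log events for a brute-force pattern. The log format and threshold are hypothetical; Datadog's Cloud Security Platform ships far richer detection rules that correlate signals across the whole stack.

```python
from collections import Counter

# Hypothetical structured access-log events.
events = [
    {"event": "login_failed", "source_ip": "203.0.113.7"},
    {"event": "login_failed", "source_ip": "203.0.113.7"},
    {"event": "login_ok",     "source_ip": "198.51.100.2"},
    {"event": "login_failed", "source_ip": "203.0.113.7"},
]

# Count failed logins per source IP.
failures = Counter(e["source_ip"] for e in events if e["event"] == "login_failed")

for ip, count in failures.items():
    if count >= 3:  # hypothetical alerting threshold
        print(f"Possible brute force from {ip}: {count} failed logins")
```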

Datadog also added synthetics in 2019, as I mentioned before. Synthetics are simulations of user behavior to see if everything works as it should, even if no users are on the system yet (a bare-bones sketch of the idea follows after the quotes below). That capability was added through Datadog's acquisition of Madumbo, as we saw earlier. Pomel on synthetics:

There is an existing category for that. It’s not necessarily super interesting on its own. It tends to be a bit commoditized and it’s a little bit high churn, but it makes sense as part of a broader platform which we offer. When you have a broader platform, the churn goes away and the fact it is commoditized can actually differentiate by having everything else on the platform.

And then Pomel adds a short but very interesting sentence:

There’s a few like that, and there’s more we’re adding.

So, you shouldn't expect the expansion of Datadog to stop anytime soon.
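As for the synthetics sketch promised above: conceptually, a synthetic test is just a scripted probe that behaves like a user and asserts on the result. A bare-bones Python version might look like this (the URL and latency threshold are hypothetical; a real synthetics product runs such checks on a schedule, from many locations, with full browser simulation):

```python
import time

import requests  # third-party HTTP client: pip install requests

def synthetic_check(url: str, max_latency_s: float = 2.0) -> bool:
    """Simulate a user hitting the page and verify it responds fast enough."""
    start = time.monotonic()
    response = requests.get(url, timeout=10)
    latency = time.monotonic() - start

    ok = response.status_code == 200 and latency <= max_latency_s
    print(f"{url}: status={response.status_code}, latency={latency:.2f}s, ok={ok}")
    return ok

synthetic_check("https://example.com/login")  # hypothetical endpoint
```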

How Datadog makes money

In short, Datadog makes money through a SaaS model, Software-as-a-Service. That means that customers pay a recurring monthly fee. But let's look at how this works in more detail.

Datadog uses a land-and-expand model. It offers a free tier that is limited in volume: basically, you get infrastructure observability for free if you have fewer than five hosts. You will have to pay if you have more, as it makes no sense to leave certain servers out.

(Datadog pricing, source: Datadog)

This is how Datadog defines a host:

A host is any physical or virtual OS instance that you monitor with Datadog. It could be a server, VM, node (in the case of Kubernetes) or App Service Plan instance (in the case of Azure App Services).

This is what you get in the different plans:

(What you get in the different plans, source: Datadog)

It's important to know that this is just the infrastructure module. Datadog is a master at cross-selling and upselling, and sells existing customers several of these modules:

(Datadog's different modules, source: Datadog)

This is another example, for APM & Continuous Profiler.

(APM & Continuous Profiler pricing, source: Datadog)

Other modules, like log management, use usage-based pricing:

(Log management usage-based pricing, source: Datadog)

I won't list all the pricing possibilities for all modules here. You can go to this page if you want to see them all.

Datadog's Sales Approach

The sales approach the company takes is really aimed at developers. When I hear Olivier Pomel talk about it, it reminds me a lot of the approach of Twilio's founder and CEO Jeff Lawson to sales and doing business in general, summarized in the title of his book: "Ask Your Developer." It means that the sales strategy is bottom-up: after the developers have been convinced, they convince their CIO or CTO, and then the big contracts are signed.

For large enterprises, Datadog works a bit differently, but not that much. It first talks to the CIO, who lets their teams test the software (with the free tier) to get feedback. After a certain time, Datadog comes back, and this often results in a signed order form.

Olivier Pomel about this approach:

Small company or large company – the product is adopted the same way. The users in the end are very similar. When you’re a developer at a very large enterprise you don’t think of yourself differently as a developer at a start-up or smaller company. There’s more and more communities between those.

There are four types of sales teams at Datadog. The enterprise sales team sells to large companies; the customer success team takes care of onboarding and cross-selling to existing customers; the partner team works with resellers, referral partners, system integrators and other outside sellers; and the inside sales team focuses on bringing in new customers.

As you may guess, there is a lot of training for the salespeople so they stay on top of their industry. They also have to be able to translate a customer's problems into one or several product offerings.

Affordability is important to Datadog. Founder and CEO Olivier Pomel:

In terms of pricing philosophy though, we had to be fair in what we wanted to achieve with the price. And the number one objective for us was to be deployed as widely as possible precisely so we could bring all those different streams of data and different teams together. I wanted to make sure we were in a situation where customers were not deploying us in one place and then forgetting the rest because it can’t afford it.

Pomel also gives an interesting insight into how the company decided on its pricing:

We looked at the overall share, what it would get, how much they would pay for their infrastructure, we decided which fraction of that we thought they could afford for us, then we divided that by the salary and infrastructure so we could actually get a price that scales.

Now the most important thing about pricing as we’ve been scaling it – and customers send us more and more data – is to make sure that customers have the control and they can align what they pay with the value of what they get.

This customer-centricity of the pricing model is an important point of differentiation.

The Earnings: What I Pay Attention To

Datadog is set to report its earnings on Thursday. Most important to me is revenue growth. The consensus estimate for revenue is $381.28M, up 63.25% YoY. But Datadog has beaten the consensus in every single quarter since it became a public company.

(Datadog revenue beats, source: Seeking Alpha Premium)

The consensus for EPS (on an adjusted basis) stands at $0.15, but in the previous quarter, Datadog blew away the estimates too: the consensus was $0.13, but the company brought in $0.24.

When you look at free cash flow margins, you see that Datadog is very profitable. This chart shows revenue and free cash flow.

(DDOG revenue and free cash flow, data by YCharts)

$335.95M of free cash flow on total revenue of $1.193B means an FCF margin of 28%, and that margin is still improving: in the previous quarter, Q1 2022, the company had an FCF margin of almost 36%. Very impressive, especially if you look at how little the company invests in sales compared to other high-growth companies. And SG&A (selling, general and administrative costs) continues to go down as a percentage of revenue, despite very high revenue growth of what could be 70%.
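As a quick sanity check on that margin, using the figures cited above:

```python
# Trailing free cash flow and revenue as cited above.
fcf = 335.95e6      # $335.95M
revenue = 1.193e9   # $1.193B

print(f"FCF margin: {fcf / revenue:.1%}")  # -> FCF margin: 28.2%
```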

(Datadog SG&A going down as a percentage of revenue, source: Seeking Alpha Premium)

Of course, with a forward P/S of 20 and a forward P/E of 134, the stock is not cheap. But if you are a long-term investor, and for me that means at least three years, preferably longer, I think Datadog is a very exciting stock to own and well worth the premium, especially if you look at the high free cash flow. Let's see if the company keeps executing when it announces its earnings on Thursday.

Some of you might wonder whether you should buy before earnings. I'm not a market timer; I invest for the long term. Every two weeks, I add money to my portfolio, and I often scale into positions over years. So investing, for me, is a constant process, not a point in time. The best situation for me would be that Datadog has great earnings but the stock drops anyway on a small detail that doesn't really matter. In that case, I would definitely add a bit more than usual.

In the meantime, keep growing!

Nucleus Research Releases 2022 Database Management Technology Value Matrix

Press release content from Business Wire. The AP news staff was not involved in its creation.

MIAMI--(BUSINESS WIRE)--Jul 26, 2022--

With data stored and processed in databases expected to double by 2025, many organizations are reevaluating their backend infrastructure in order to reduce costs, extend agility, and improve performance at scale.

“As data volumes continue to scale, we’ve found that customer needs have also shifted,” said Research Analyst Alexander Wurm. “Now customers evaluate a wider range of capabilities in competitive deals, and vendors compete to deliver elastic scalability, serverless compute, integrated data governance, and in-database analytics to capture emerging adoption.”

To address emerging customer needs, leading vendors are investing in artificial intelligence and machine learning in order to extend governance and automate administrative tasks such as performance tuning and resource provisioning.

Leaders in this year’s Value Matrix deliver advanced functionality without sacrificing ease-of-use at scale. These include AWS, MongoDB, Microsoft, Oracle Database, Oracle MySQL, and SingleStore.

The Experts in this year’s Value Matrix are organizations that deliver value to customers with complex use cases through deep functionality and industry-specific capabilities. These include Cockroach Labs, IBM, and Redis.

Facilitators in this year’s Value Matrix deliver value through greater ease of use and quick implementation. These include Couchbase, Google Cloud, and MariaDB.

Core Providers deliver core capabilities with faster and less expensive adoption. This year’s Value Matrix Core Providers are EnterpriseDB, SAP, and Scylla.

To download the full 2022 Database Management Technology Value Matrix, click here.

About Nucleus Research

Nucleus Research is the recognized global leader in ROI technology research. Using a case-based approach, we provide research streams and advisory services that allow vendors and end users to quantify and maximize the return from their technology investments. For more information, visit NucleusResearch.com or follow our latest updates on LinkedIn.

View source version on businesswire.com: https://www.businesswire.com/news/home/20220726005767/en/

CONTACT: Morgan Whitehead

Nucleus Research

mwhitehead@nucleusresearch.com


SOURCE: Nucleus Research

Copyright Business Wire 2022.
