Python, a dynamic, object-oriented programming language, has been around for quite some time. In its lifetime there have been many web frameworks to choose from (e.g. Pylons, TurboGears, CherryPy, Zope, and Django), making it difficult for developers to make a selection, as Ian Bicking pointed out:
For a long, long time (longer than most of those frameworks have existed) people have complained about the proliferation of web frameworks in Python.
Recently, Django has picked up steam in the worlds of both Python and Java.
Django is a high-level Python Web framework that encourages rapid development and clean, pragmatic design.
Developed and used over two years by a fast-moving online-news operation, Django was designed to handle two challenges: the intensive deadlines of a newsroom and the stringent requirements of the experienced Web developers who wrote it. It lets you build high-performing, elegant Web applications quickly.
Django focuses on automating as much as possible and adhering to the DRY principle.
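Django's "automate as much as possible" approach shows up in its model layer: one declaration drives the database schema, forms, and admin. As a framework-free sketch of that DRY idea (the `Field` class and generators below are hypothetical illustrations, not Django's API):

```python
# Hypothetical sketch of the DRY principle behind Django's models:
# a single field definition drives both the SQL schema and an HTML form.

class Field:
    def __init__(self, name, sql_type, max_length=None):
        self.name = name
        self.sql_type = sql_type
        self.max_length = max_length

def to_sql(table, fields):
    # Derive a CREATE TABLE statement from the field definitions.
    cols = ", ".join(
        f"{f.name} {f.sql_type}({f.max_length})" if f.max_length
        else f"{f.name} {f.sql_type}"
        for f in fields
    )
    return f"CREATE TABLE {table} ({cols});"

def to_form(fields):
    # Derive HTML form inputs from the same definitions.
    return "\n".join(
        f'<input name="{f.name}" maxlength="{f.max_length or ""}">'
        for f in fields
    )

fields = [Field("headline", "varchar", 100), Field("body", "text")]
print(to_sql("article", fields))
```

In real Django the same single source of truth is the model class, from which migrations, admin pages, and forms are all generated.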
On the JVM side there exists Jython:
An implementation of the high-level, dynamic, object-oriented language Python written in 100% Pure Java, and seamlessly integrated with the Java platform. It thus allows you to run Python on any Java platform.
It took nearly six years for Jython to go from version 2.1 to 2.2, but in just the last few months Jython has gone through two release candidates and another stable release, currently at 2.2.1. The Jython developers are working hard on producing Jython 2.5, which aims to align Jython with CPython 2.5 and provide a much cleaner and more consistent code base.
A side goal of Jython 2.5 is to try to get some CPython frameworks working, especially the web frameworks, for example:
InfoQ recently had the opportunity to interview Jim Baker, a Python evangelist and contributor to Django on Jython (DoJ), to find out what is expected and when.
What is the expected release date for Django on Jython?
What version of Jython is going to be required?
This year. It's predicated on the next release of Jython; see #2 for that planning. Django, in contrast, just works, with only minor changes, thanks to a lot of work that many other people did in identifying (minor) Jython incompatibilities. Most of the problems we have identified actually occur only in testing, where Django makes certain assumptions about how Python should run that don't apply to Jython. One example is the assumption that the hash algorithm is the same across dictionary implementations; because we use Java's (ConcurrentHashMap), this is not the case. However, that's an artifact of the testing process; Django doesn't really care about that. Still, we plan to certify this by passing all the tests (fixed as necessary).
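The dictionary-hashing assumption Baker mentions is easy to picture: a test that asserts on a dict's iteration order bakes one implementation's hash behavior into the test suite. A small, hedged sketch (illustrative only, not Django's actual test code):

```python
# A test comparing a dict's iteration order against a hard-coded list is
# implementation-specific: CPython and Jython (whose dicts sit on Java's
# ConcurrentHashMap) may order the same keys differently.

settings = {"DEBUG": True, "APPEND_SLASH": True, "ROOT_URLCONF": "urls"}

# Fragile: depends on this interpreter's hash/ordering behavior.
fragile = list(settings)

# Portable: compare contents, not order.
portable = sorted(settings)
assert portable == ["APPEND_SLASH", "DEBUG", "ROOT_URLCONF"]
assert set(fragile) == set(portable)
```

Sorting (or comparing as sets) before asserting makes such tests pass on any conforming Python implementation.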
Jython 2.5 - equivalent to CPython 2.5 (or what is conventionally called Python!) - is what we are targeting in the Jython project. This is actually moving very fast. We have a 2.5 compiler that's available for experimental use, and it's getting more and more robust as additional people test it. This "newcompiler" was initiated by a Google Summer of Code project that I mentored. (Bruce Eckel mentioned this in a blog post. We in fact hit that mid-September date!) At the Python Conference in mid-March, we will be setting the specific target based on where we are. Tobias and I will also be presenting our paper "A New Compiler for Jython" at PyCon.
With the release, will there be a simpler install? Currently your blog post suggests applying patches, copying files, etc.
There will be a simple install! My reporting was simply to show how close we were in fact to this goal. I also anticipate plugin support in major IDEs like Eclipse or NetBeans, although this will come later.
Is Django trying to be what Rails is for Ruby and Grails is becoming for Groovy?
Django offers comparable functionality to those web frameworks, with ostensibly a more robust platform. Django is written to be multithreaded, unlike RoR, which means we don't have to go through a lot of tricks to make it work on the Java platform, such as using multiple classloaders. We currently have database support for PostgreSQL, with some work done also on MySQL. I helped write the Oracle backend for Django. We're also planning to support Java DB (Derby).
Jim also emphasized the usefulness of having a preconfigured stack available to ease experimentation with Django on Jython (DoJ):
I'd like to see the following preconfigured stack available for Django on Jython (DoJ): Derby + Tomcat. This should be something that a developer can just access via a plugin from Eclipse or Netbeans or IDE of their choice, which means they can configure Derby and Tomcat directly from the IDE. It also provides an obvious migration path to other containers and databases. Perhaps more importantly, such a setup allows for easy DoJ experimentation, whether that's for someone building a Django app, or also using tuple spaces, rules engines, PDF tools, or other parts of the heavy-lifting infrastructure available on the Java platform. This is where I think DoJ provides true compelling value.
For additional information try the following links:
This week's Java roundup for July 18th, 2022, features news from Oracle, JDK 18, JDK 19, JDK 20, Spring Boot and Spring Security milestone and point releases, Spring for GraphQL 1.0.1, Liberica JDK updates, Quarkus 2.10.3, a CVE in Grails, JobRunr 5.1.6, JReleaser maintenance, Apache Tomcat 9.0.65 and 10.1.0-M17, TornadoVM on Apple M1 and the JBCNConf conference.
As part of its Critical Patch Update for July 2022, Oracle has released versions 18.0.2, 17.0.4, 11.0.16, 8u333 and 7u343 of Oracle Java SE. More details may be found in the release notes for JDK 18, JDK 17, JDK 11, JDK 8 and JDK 7.
Concurrent with Oracle's Critical Patch Update, JDK 18.0.2 has been released with minor updates and removal of the alternate ThreadLocal implementation of the callAs() methods within the Subject class. However, support for the default implementation has been maintained. Further details on this release may be found in the release notes.
As per the JDK 19 release schedule, Mark Reinhold, chief architect, Java Platform Group at Oracle, formally declared that JDK 19 has entered Rampdown Phase Two to signal continued stabilization for the GA release in September. Critical bugs, such as regressions or serious functionality issues, may be addressed, but must be approved via the Fix-Request process.
The final set of seven features for the JDK 19 release will include:
Build 32 of the JDK 19 early-access builds was made available this past week, featuring updates from Build 31 that include fixes to various issues. More details may be found in the release notes.
Build 7 of the JDK 20 early-access builds was also made available this past week, featuring updates from Build 6 that include fixes to various issues. Release notes are not yet available.
For JDK 19 and JDK 20, developers are encouraged to report bugs via the Java Bug Database.
Spring Boot 2.7.2 has been released featuring bug fixes, improvements in documentation and dependency upgrades such as: Spring Framework 5.3.22, Spring Data 2021.2.2, Spring GraphQL 1.0.1, Tomcat 9.0.65, Micrometer 1.9.2, Reactor 2020.0.21 and MariaDB 3.0.6. Further details on this release may be found in the release notes.
Spring Boot 2.6.10 has been released featuring bug fixes, improvements in documentation and dependency upgrades such as: Spring Framework 5.3.22, Spring Data 2021.1.6, Jetty Reactive HTTPClient 1.1.12, Hibernate 5.6.10.Final, Micrometer 1.8.8, Netty 4.1.79.Final and Reactor 2020.0.21. More details on this release may be found in the release notes.
On the road to Spring Boot 3.0, the fourth milestone release has been made available to provide support for: the new Java Client in Elasticsearch; Flyway 9; and Hibernate 6.1. Further details on this release may be found in the release notes.
Spring Security 5.8.0-M1 and 6.0.0-M6 have been released featuring: a new setDeferredContext() method in the SecurityContextHolder class to support lazy access to a SecurityContext lookup; support for the SecurityContextHolderStrategy interface to eliminate race conditions when there are multiple application contexts; support for the AuthorizationManager interface to delay the lookup of the Authentication (such as Supplier<Authentication>) versus a direct Authentication lookup; and an alternative to MD5 hashing in the remember-me token. There were numerous breaking changes in version 6.0.0-M6. More details on these releases may be found in the release notes for version 5.8.0-M1 and version 6.0.0-M6.
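The idea behind accepting a Supplier<Authentication> is deferred lookup: hand the authorization check a callable, and the potentially expensive Authentication resolution happens only if the decision actually needs it. A language-neutral sketch of the pattern, here in Python (illustrative only, not Spring's API):

```python
# Deferred lookup: resolution runs only when the check needs it.
calls = {"lookups": 0}

def load_authentication():
    # Stand-in for an expensive security-context lookup.
    calls["lookups"] += 1
    return {"user": "alice", "roles": ["ADMIN"]}

def check(auth_supplier, required_role):
    # A public path can short-circuit without ever resolving auth.
    if required_role is None:
        return True
    auth = auth_supplier()  # resolved lazily, at most here
    return required_role in auth["roles"]

assert check(load_authentication, None) is True
assert calls["lookups"] == 0   # public path never resolved auth
assert check(load_authentication, "ADMIN") is True
assert calls["lookups"] == 1
```

Passing the function itself rather than its result is what lets requests that need no authorization decision skip the lookup entirely.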
Spring for GraphQL 1.0.1 has been released featuring: improved handling when a source/parent is expected and is null; support for resolving exceptions from a GraphQL subscription; and a new default limit on the DEFAULT_AUTO_GROW_COLLECTION_LIMIT field within the DataBinder class. This version also ships with Spring Boot 2.7.2 and a dependency upgrade to GraphQL Java 18.2. Further details on this release may be found in the release notes.
Also concurrent with Oracle's Critical Patch Update (CPU) for July 2022, BellSoft has released CPU patches, including version 8u341, for Liberica JDK, their downstream distribution of OpenJDK. In addition, Patch Set Update (PSU) versions 18.0.2, 17.0.4, 11.0.16 and 8u342, containing CPU and non-critical fixes, have also been released.
Quarkus 2.10.3.Final has been released to address CVE-2022-2466, a vulnerability discovered in the SmallRye GraphQL server extension in which server requests were not properly terminated. This vulnerability only affects the 2.10.x release train. Developers are encouraged to upgrade to this latest release. More details on this release may be found in the release notes.
The Micronaut Foundation has identified a remote code execution vulnerability in the Grails Framework that has been documented as CVE-2022-35912. This allows an attacker to "remotely execute malicious code within a Grails application runtime by issuing a specially crafted web request that grants the attacker access to the class loader."
This attack exploits a portion of data binding capability within Grails. Versions 5.2.1, 5.1.9, 4.1.1 and 3.3.15 have been patched to protect against this vulnerability.
Ronald Dehuysser, founder and primary developer of JobRunr, a utility to perform background processing in Java, has released version 5.1.6 featuring Micrometer metrics that now expose recurring jobs and the number of background job servers.
An early-access release of JReleaser, a Java utility that streamlines creating project releases, has been made available featuring a fix to an issue in Gradle where a property wasn't properly checked before accessing it.
The Apache Software Foundation has provided milestone and point releases for Apache Tomcat.
Apache Tomcat 9.0.65 features: a fix for CVE-2022-34305, a low-severity XSS vulnerability in the form authentication example; support for repeatable builds; and an update of the packaged version of the Tomcat Native Library to 1.2.35, which includes Windows binaries built with OpenSSL 1.1.1q. Further details on this release may be found in the changelog.
Apache Tomcat 10.1.0-M17 features: an update of the packaged version of the Tomcat Native Library to 2.0.1, which includes Windows binaries built with OpenSSL 3.0.5; support for repeatable builds; and an update of the experimental Panama modules with support for OpenSSL 3.0+. Apache Tomcat 10.1.0-M17 is a milestone release that provides developers with early access to the new features in the Apache Tomcat 10.1 release train. More details on this release may be found in the changelog.
The team behind TornadoVM, an open-source programming framework for running Java on heterogeneous hardware, has announced that developers may still install TornadoVM on the Apple M1 architecture despite Apple having deprecated OpenCL.
JBCNConf 2022 was held at the International Barcelona Convention Center in Barcelona, Spain, this past week featuring many speakers from the Java community who presented talks and workshops.
This news story has been updated to include the definition of the acronym PSU (Patch Set Update) in the Liberica JDK section.
New Jersey, United States – Cybersecurity Market 2022 – 2028, Size, Share, and Trends Analysis Research Report Segmented with Type, Component, Application, Growth Rate, Region, and Forecast | Key companies profiled: IBM (US), Cisco (US), Check Point (Israel), and others.
The growth of the cybersecurity market can be attributed to the increasing sophistication of cyber attacks. The frequency and intensity of cyber scams and crimes have risen over the past decade, resulting in enormous losses for businesses. As cybercrime has grown substantially, organizations worldwide have directed their security spending toward strengthening in-house security infrastructure. Targeted attacks have risen in recent years, infiltrating victims' network infrastructure while maintaining anonymity. Attackers with a specific target in mind typically strike endpoints, networks, on-premises devices, cloud-based applications, data, and other IT systems. The primary motive behind targeted attacks is to disrupt the networks of targeted organizations and steal critical information. As a result of these targeted attacks, business-critical operations are harmed through business disruption, intellectual-property loss, financial loss, and the loss of critical and sensitive customer information. The impact of targeted cyber attacks affects not only the targeted organizations but also their domestic and international customers.
According to our latest report, the Cybersecurity market, which was valued at US$ million in 2022, is expected to grow at a CAGR of approximate percent over the forecast period.
Receive the sample Report of Cybersecurity Market Research Insights 2022 to 2028 @ https://www.infinitybusinessinsights.com/request_sample.php?id=849932
Cybersecurity requirements grow at a higher rate than the budgets intended to address them. Most small firms lack the budget and the IT security expertise needed to adopt enhanced cybersecurity solutions to safeguard their networks and IT infrastructure from cyber attacks. Limited capital funding can be a major restraint for small and medium-sized businesses adopting a cybersecurity model. Startups in emerging economies across MEA, Latin America, and APAC often struggle to secure the financing needed to adopt cybersecurity solutions for their business. Capital funding in these organizations is largely reserved for safeguarding business-critical operations, sometimes leaving little or no funding for advanced cybersecurity solutions. Moreover, cybersecurity budgets in emerging startups are insufficient to implement Next-Generation Firewalls (NGFWs) and Advanced Threat Protection (ATP) solutions.
The cloud computing model is widely adopted because of its powerful and flexible infrastructure. Many organizations are shifting their preference toward cloud solutions to simplify data storage, and because cloud services provide remote server access over the Internet, enabling access to virtually unlimited computing power. Implementing a cloud-based model allows organizations to manage all of their applications, with testing and analytics running in the background. Cloud deployment also lets organizations combine supplementary cybersecurity technologies, such as software-defined perimeters, to create robust and highly secure platforms. Governments in many countries issue dedicated guidelines and regulations for cloud platform security, which drives growth of the cybersecurity market across the globe. SMEs continually look to modernize their applications and infrastructure by moving to cloud-based platforms, such as Software-as-a-Service (SaaS) and Infrastructure-as-a-Service (IaaS).
Based on components, the cybersecurity market is segmented into hardware, software, and services. Cybersecurity technology is offered by various vendors as an integrated platform or a tool that integrates with enterprises' existing infrastructure. Vendors also offer cybersecurity hardware with associated services that help organizations implement the required solution in their current infrastructure. In recent years, several developments have been witnessed in cybersecurity software and related hardware development kits.
Cybersecurity services are classified into professional and managed services. Professional services are further segmented into consulting, risk, and threat assessment; design and implementation; training and education; and support and maintenance. The demand for services is directly related to the adoption level of cybersecurity solutions. The adoption of cybersecurity solutions is increasing for securing business-sensitive applications.
Access the Premium Cybersecurity market research report 2022 with a full index.
North America, being a technologically advanced region, tops the world in terms of the presence of security vendors and cyber incidents. As the world moves toward interconnection and digitalization, protecting enterprise-critical infrastructure and sensitive data has become one of the major challenges. North America is an early adopter of cybersecurity solutions and services across the globe. Within North America, the US is expected to hold the larger market share in terms of revenue. Governments in the region identify the increasing instances of cyber attacks as among the most crucial economic and national security challenges.
Businesses in this region lead the world in the adoption of advanced technologies and infrastructures, such as cloud computing, big data analytics, and IoT. Attacks are increasing dramatically, becoming more sophisticated, and targeting business applications across industry verticals. Sophisticated cyber attacks include DDoS, ransomware, bot attacks, malware, zero-day attacks, and spear-phishing attacks.
The infrastructure protection segment accounted for the largest share of overall revenue in 2022. The high market share is attributed to the rising number of data center constructions and the adoption of connected and IoT devices. Further, programs introduced by governments across some regions, such as the Critical Infrastructure Protection Program in the U.S. and the European Programme for Critical Infrastructure Protection (EPCIP), are expected to contribute to market growth. For instance, the National Critical Infrastructure Prioritization Program, created by the Cybersecurity and Infrastructure Security Agency (CISA), helps identify the assets and systems, across industries including energy, manufacturing, transportation, oil & gas, and chemicals, that are vulnerable to cyber attacks and whose damage or destruction would lead to national catastrophic effects.
Major vendors in the global cybersecurity market include IBM (US), Cisco (US), Check Point (Israel), FireEye (US), Trend Micro (Japan), NortonLifeLock (US), Rapid7 (US), Micro Focus (UK), Microsoft (US), Amazon Web Services (US), Oracle (US), Fortinet (US), Palo Alto Networks (US), Accenture (Ireland), McAfee (US), RSA Security (US), Forcepoint (US), Sophos PLC (UK), Imperva (US), Proofpoint (US), Juniper Network (US), Splunk (US), SonicWall (US), CyberArk (US), F-secure (Finland), Qualys (US), F5 (US), AlgoSec (US), SentinelOne (US), DataVisor (US), RevBits (US), Wi-Jungle (India), BluVector (US), Aristi Labs (India) and Securden (US).
International: +1 518 300 3575
Email: [email protected]
An abuse survivor can sue Visa over videos of her posted to Pornhub, a US court has ruled.
Serena Fleites was 13 in 2014 when, it is alleged, a boyfriend pressured her into making an explicit video which he posted to Pornhub.
Ms Fleites alleges that Visa, by processing revenue from ads, conspired with Pornhub's parent firm MindGeek to make money from videos of her abuse.
Visa had sought to be removed from the case.
Ms Fleites' story has featured in the New York Times article The Children of Pornhub - an article which prompted MindGeek to delete millions of videos and make significant changes to its policies and practice.
Her allegations are summarised in the pre-trial ruling of the Central District Court of California.
The initial explicit video, posted to Pornhub without her knowledge or consent, had 400,000 views by the time she discovered it, Ms Fleites says.
She alleges that after becoming aware of the video, she contacted MindGeek pretending to be her mother "to inform it that the video qualified as child pornography". A few weeks later, the video was removed.
But the video was downloaded by users and re-uploaded several times, with one of the re-uploads viewed 2.7 million times, she argues.
MindGeek earned advertisement revenue from these re-uploads, it is alleged.
Ms Fleites says her life "spiralled out of control": there were several failed suicide attempts and family relationships deteriorated. Then, while living at a friend's house, an older man introduced her to heroin.
To fund her addiction, while still a child, she created further explicit videos at this man's behest, some of which were uploaded to Pornhub.
"While MindGeek profited from the child porn featuring Plaintiff, Plaintiff was intermittently homeless or living in her car, addicted to heroin, depressed and suicidal, and without the support of her family," Judge Cormac J. Carney's summary of her allegations says.
MindGeek told the BBC that at this point in the case, the court has not yet ruled on the truth of the allegations, and is required to assume all of the plaintiff's allegations are true and accurate.
"When the court can actually consider the facts, we are confident the plaintiff's claims will be dismissed for lack of merit," the company said.
The Judge ruled that, at the current stage of proceedings, "the Court can infer a strong possibility that Visa's network was involved in at least some advertisement transactions relating directly to Plaintiff's videos".
But Visa argued that the "allegation that Visa recognized MindGeek as an authorized merchant and processed payment to its websites does not suggest that Visa agreed to participate in sex trafficking of any kind".
It also argued, according to the judge's account of its position, that a commercial relationship alone does not establish a conspiracy.
But Judge Carney said that, again at this stage of proceedings, "the Court can comfortably infer that Visa intended to help MindGeek monetize child porn from the very fact that Visa continued to provide MindGeek the means to do so and knew MindGeek was indeed doing so.
"Put yet another way, Visa is not alleged to have simply created an incentive to commit a crime, it is alleged to have knowingly provided the tool used to complete a crime".
A spokesperson for Visa told the BBC that it condemned sex trafficking, sexual exploitation and child sexual abuse material.
"This pre-trial ruling is disappointing and mischaracterizes Visa's role and its policies and practices. Visa will not tolerate the use of our network for illegal activity. We continue to believe that Visa is an improper defendant in this case."
Last month MindGeek's chief executive officer and chief operating officer resigned.
The senior departures followed further negative press in a New Yorker magazine article examining, among other things, the company's moderation policies.
MindGeek told the BBC that it has:
zero tolerance for the posting of illegal content on its platforms
banned uploads from anyone who has not submitted government-issued ID that passes third-party verification
eliminated the ability to download free content
integrated several technological platform and content moderation tools
instituted digital fingerprinting of all videos found to be in violation of our Non-Consensual Content and CSAM Policies to help protect against removed videos being reposted
expanded its moderation workforce and processes
The company also said that any insinuation that it does not take the elimination of illegal material seriously is "categorically false".
Another day, another vulnerability. Discovered by [Kevin Backhouse], CVE-2018-4407 is a particularly serious problem because it is present all throughout Apple’s product line, from the Macbook to the Apple Watch. The flaw is in the XNU kernel shared by all of these products.
This is a buffer overflow issue in the error handling for network packets. The kernel is expecting a fixed length of those packets but doesn’t check to prevent writing past the end of the buffer. The fact Apple’s XNU kernel powers all their products is remarkable, but issues like this are a reminder of the potential downside to that approach. Thanks to responsible disclosure, a patch was pushed out in September.
Buffer overflows aren’t new, but a reminder on what exactly is going on might be in order. In low level languages like C, the software designer is responsible for managing computer memory manually. They allocate memory, tagging a certain number of bytes for a given use. A buffer overflow is when the program writes more bytes into the memory location than are allocated, writing past the intended limit into parts of memory that are likely being used for a different purpose. In short, this overflow is written into memory that can contain other data or even executable code.
With a buffer overflow vulnerability, an attacker can write whatever code they wish to that out-of-bounds memory space, then manipulate the program to jump into that newly written code. This is referred to as arbitrary code execution. [Computerphile] has a great walk-through on buffer overflows and how they lead to code execution.
[Kevin] took the time to explain the issue he found in further depth. The vulnerability stems from the kernel code making an assumption about incoming packets. ICMP error messages are sent automatically in response to various network events. We're probably most familiar with the "connection refused" message, indicating a port closed by the firewall. These ICMP packets include the IP header of the packet that triggered the error. The XNU implementation of this process assumes that the incoming packet will always have a header of the correct length, and copies that header into a buffer without first checking the length. A specially crafted packet can have a longer header, and this is the data that overflows the buffer.
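At its core, the missing step is a length check before the copy: verify that the header the packet claims to carry actually fits the fixed-size destination buffer. A simplified Python sketch of the safe pattern (the real XNU code is C; the buffer size and parsing here are illustrative only):

```python
BUF_LEN = 20  # fixed-size destination buffer: a minimal IPv4 header

def copy_header(packet: bytes) -> bytes:
    """Copy the embedded IP header out of an ICMP error payload,
    with the bounds check the vulnerable path was missing."""
    if not packet:
        raise ValueError("empty packet")
    # IHL (header length in 32-bit words) is the low nibble of byte 0.
    header_len = (packet[0] & 0x0F) * 4
    # The missing check: does the claimed length fit the buffer
    # (and the packet itself)?
    if header_len > BUF_LEN or header_len > len(packet):
        raise ValueError("header length exceeds buffer")
    return packet[:header_len]

# Well-formed: version 4, IHL 5 -> first byte 0x45, 20-byte header.
assert len(copy_header(bytes([0x45]) + bytes(19))) == 20

# Malicious: IHL 15 claims a 60-byte header, larger than the buffer.
try:
    copy_header(bytes([0x4F]) + bytes(59))
    raise AssertionError("overflow not caught")
except ValueError:
    pass
```

In C, skipping that comparison and copying `header_len` bytes into a 20-byte buffer is exactly the overflow described above; in a memory-safe language the equivalent mistake merely raises an error.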
Because of the role ICMP plays in communicating network status, a closed firewall isn’t enough to mitigate the attack. Even when sent to a closed port, the vulnerability can still trigger. Aside from updating to a patched OS release, the only mitigation is to run the macOS firewall in what it calls “stealth mode”. This mode doesn’t respond to pings, and more importantly, silently drops packets rather than sending ICMP error responses. This mitigation isn’t possible for watchOS and iOS devices.
The good news about the vulnerability is that a packet, malformed in this way, has little chance of being passed through a router at all. An attacker must be on the same physical network in order to send the malicious packet. The most likely attack vector, then, is the public WiFi at the local coffee shop.
Come back after the break for a demonstration of this attack in action.
So far, the vulnerability is only known to crash machines, as seen above. Because of the nature of the problem, it’s likely that this vulnerability will eventually be turned into a full code execution exploit. [Kevin] informed Apple of the issue privately, and they fixed the issue in September updates of macOS and iOS.
ConfigOS Identifies GPO Compliance Defects
ASHBURN, Va., July 13, 2022 /PRNewswire/ -- SteelCloud LLC, a STIG and CIS compliance automation software developer, announced today that the USPTO has awarded it patent 11,368,366 for "Group Policy Object update compliance and synchronization."
This patent covers functionality delivered in SteelCloud's ConfigOS compliance software suite. SteelCloud's remediation automation identifies every control on every endpoint where the customer's implementation of Microsoft's Active Directory Group Policy Objects (GPOs) enforces non-compliant policies. In addition, GPO synchronization output includes reports, archives, and logs that can integrate with other applications such as Splunk, Xacta, and eMASS.
"Our experience has proven that it is virtually impossible for large organizations to manage compliance by using GPOs exclusively," said Brian Hajost, SteelCloud Chief Operating Officer. "Our patented software identifies GPO defects that take endpoints out of compliance. ConfigOS automates GPO conflict synchronization allowing our customers to effectively flatten and simplify their implementation of Microsoft's Active Directory while removing a significant barrier in maintaining STIG or CIS compliance."
GPO conflict synchronization is available today, at no additional charge, to all ConfigOS customers.
SteelCloud's ConfigOS software is currently implemented in hundreds of commercial and government organizations. Use cases for ConfigOS range from business and cloud environments to OT/SCADA and weapon systems. ConfigOS scans and remediates hundreds of system-level controls in minutes. Automated remediation rollback, as well as comprehensive compliance reporting and SIEM dashboard integration, are provided. ConfigOS was designed to harden hundreds of system-level controls around an application stack in about 60 minutes, typically eliminating weeks or months from the RMF accreditation timeline. ConfigOS addresses Microsoft Windows workstation and server operating systems, SQL Server, IIS, IE, Chrome, and all of the Microsoft Office components. The same instance of ConfigOS addresses Cisco network devices, Apache, Red Hat Enterprise Linux 5/6/7/8, SUSE, CentOS, Ubuntu, and Oracle Linux. Learn more at https://www.steelcloud.com/configos-cybersecurity/.
SteelCloud develops STIG and CIS compliance software for government and commercial customers. Our products automate policy and security remediation by reducing the complexity, effort, and expense of meeting government security mandates. SteelCloud has delivered security policy-compliant solutions to enterprises worldwide, simplifying implementation and ongoing security and compliance support. SteelCloud products are easy to license through our GSA Schedule 70 contract. SteelCloud can be reached at (703) 674–5500 or firstname.lastname@example.org. Additional information is available at www.steelcloud.com, or contact Jamie Coffey at email@example.com.
View original content:https://www.prnewswire.com/news-releases/steelcloud-extends-microsoft-active-directory-compliance-with-new-patent-301585231.html
SOURCE SteelCloud LLC
Last week we saw the announcement of the new Raspberry Pi Zero 2 W, which is basically an improved quad-core version of the Pi Zero — more comparable in speed to the Pi 3B+, but in the smaller Zero form factor. One remarkable aspect of the board is the Raspberry-designed RP3A0 system-in-package, which includes the four CPUs and 512 MB of RAM all on the same chip. While 512 MB of memory is not extravagant by today’s standards, it’s workable. But this custom chip has a secret: it lets the board run on reasonably low power.
When you’re using a Pi Zero, odds are that you’re making a small project, and maybe even one that’s going to run on batteries. The old Pi Zero was great for these self-contained, probably headless, embedded projects: sipping the milliamps slowly. But the cost was significantly slower computation than its bigger brothers. That’s the gap that the Pi Zero 2 W is trying to fill. Can it pull this trick off? Can it run faster, without burning up the batteries? Raspberry Pi sent Hackaday a review unit that I’ve been running through the paces all weekend. We’ll see some benchmarks, measure the power consumption, and find out how the new board does.
The answer turns out to be a qualified “yes”. If you look at mixed CPU-and-memory tasks, the extra efficiency of the RP3A0 lets the Pi Zero 2 W run faster per watt than any of the other Raspberry boards we tested. Most of the time, it runs almost like a Raspberry Pi 3B+, but uses significantly less power.
Along the way, we found some interesting patterns in Raspberry Pi power usage. Indeed, the clickbait title for this article could be “We Soldered a Resistor Inline with Raspberry Pis, and You Won’t Believe What Happened Next”, only that wouldn’t really be clickbait. How many milliamps do you think a Raspberry Pi 4B draws, when it’s shut down? You’re not going to believe it.
When it comes to picking a tiny Linux computer to embed in your project, you’ve got a lot more choice today than you did a few years ago. Even if you plan to stay within the comfortable world of the Raspberry Pi computers, you’re looking at the older Pi 3B+, the tiny Pi Zero, the powerhouse Pi 4B in a variety of configurations, and as of last week, the Pi Zero 2 W.
I ran all of the Raspberries through two fairly standard torture tests, all the while connected to a power supply with a 0.100 Ω precision resistor inline, and recorded the voltage drop across the resistor, and thus the current that the computers were drawing. The values here are averaged across 50 seconds by my oscilloscope, which accurately accounts for short spikes in current while providing a good long-run average. All of the Pis were run headless, connected via WiFi and SSH, with no other wires going in or out other than the USB power. These are therefore minimum figures for a WiFi-using Pi — if you run USB peripherals, don’t forget to factor them into your power budget.
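The conversion from measured voltage drop to current and power is plain Ohm's law; here is a minimal sketch of the arithmetic, using a hypothetical 24 mV averaged drop as the example reading:

```python
# Ohm's law: current through the inline shunt follows from the measured
# voltage drop across it (I = V_drop / R).
SHUNT_OHMS = 0.100  # precision resistor value from the test setup


def current_from_drop(v_drop_volts, r_ohms=SHUNT_OHMS):
    """Current (A) drawn by the Pi, from the averaged voltage drop."""
    return v_drop_volts / r_ohms


def power_watts(current_amps, supply_volts=5.0):
    """Approximate power draw, assuming a nominal 5 V USB supply."""
    return current_amps * supply_volts


# Hypothetical example: a 24 mV averaged drop across the 0.100 ohm shunt
current = current_from_drop(0.024)  # 0.24 A, i.e. 240 mA
print(power_watts(current))         # 1.2 (watts)
```

Since the 50-second oscilloscope average already smooths out current spikes, feeding that averaged drop through this formula gives a fair long-run power figure.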
Test number one is stress-ng, which simply hammers all of the available CPU cores with matrix inversion problems. This is great for heat-stressing computers, but also for testing out their maximum CPU-driven power draw. All of the Pis here have four cores except for the original Pi Zero, which has only one. What you can see here is that as you move up in CPU capability, you burn more electrons. The Pi Zero 2 has four cores, but runs at a stock 1 GHz, while the 3B+ runs at 1.4 GHz and the 4B at 1.5 GHz. More computing, more power.
Test number two is sbc-bench, which includes a memory bandwidth test (tinymembench), a mixed-use CPU benchmark (7-zip), and a test of cryptographic acceleration (OpenSSL). Unfortunately, none of the Raspberry Pis use hardware cryptographic acceleration, so the OpenSSL test ends up being almost identical to the 7-zip test — a test of mixed CPU and memory power — and I’m skipping the results here to save space.
For ease of interpretation, I’m using the sum of the two memory sub-tests as the result for TinyMemBench, and the 7-zip test results are an average of the three runs. For all of these, higher numbers are better: memory written faster and more files zipped. This is where things get interesting.
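The aggregation described above is straightforward; a sketch, using made-up raw numbers rather than the measured results:

```python
# Deriving the reported scores from raw benchmark output, as described
# above. The raw numbers here are illustrative placeholders only.
tinymembench = {"memcpy": 1200, "memset": 2900}  # MB/s, two sub-tests
sevenzip_runs = [1850, 1910, 1880]               # MIPS, three 7-zip runs

# Reported memory score: sum of the two sub-tests (higher is better).
mem_score = sum(tinymembench.values())

# Reported 7-zip score: average of the three runs (higher is better).
zip_score = sum(sevenzip_runs) / len(sevenzip_runs)

print(mem_score, zip_score)  # 4100 1880.0
```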
Looking first at the memory bandwidth scores, the 4B is way out ahead, and the old Pi Zero is bringing up the rear, but the 3B+ and the Zero 2 are basically neck-and-neck. What’s interesting, however, is the power used in the memory test. The Zero 2 W scores significantly better than the 3B+ and the 4B. It’s simply more efficient, although if you divide through to get memory bandwidth per watt of power, the old Pi Zero stands out.
Turn then to the 7-zip test, a proxy for general purpose computing. Here again, the four-core Pis all dramatically outperform the pokey Pi Zero. The Pi 4 is the fastest by far, and with proper cooling it can be pushed to ridiculous performance. But as any of you who’ve worked with Raspberry Pis and batteries know, the larger form-factor Raspberry Pi computers consume a lot more power to get the job done.
But look at the gap between the Pi Zero 2’s performance and the Pi 3B+. They’re very close! And look at the same gap in terms of power used — it’s huge. This right here is the Pi Zero 2’s greatest selling point. Almost 3B+ computational performance while using only marginally more power than the old Pi Zero. If you divide these two results to get a measure of zipped files per watt, which I’m calling computational “grunt” per watt, the Zero 2 is far ahead.
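That "grunt per watt" figure is just the benchmark score divided by the power drawn during the test; a sketch with placeholder values (these are illustrative, not the article's measurements):

```python
# "Grunt per watt": 7-zip score divided by power draw during the run.
# Scores and wattages below are illustrative placeholders, not the
# measured values from the article.
boards = {
    "Pi Zero":     {"sevenzip": 400,  "watts": 0.8},
    "Pi Zero 2 W": {"sevenzip": 1900, "watts": 1.4},
    "Pi 3B+":      {"sevenzip": 2200, "watts": 3.5},
    "Pi 4B":       {"sevenzip": 5400, "watts": 5.1},
}


def grunt_per_watt(board):
    return board["sevenzip"] / board["watts"]


# Rank the boards by efficiency, most efficient first.
for name in sorted(boards, key=lambda n: -grunt_per_watt(boards[n])):
    print(f"{name}: {grunt_per_watt(boards[name]):.0f} per watt")
```

With numbers in this ballpark, the Zero 2 tops the ranking even though the 3B+ and 4B post higher raw scores, which is exactly the pattern described above.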
If you’re looking for a replacement for a slow Raspberry Pi Zero in some portable project, it really looks like the Pi Zero 2 fits the bill perfectly.
Some projects only need to do a little bit of work, and then can shut down or slow down during times of inactivity to use less total power over the course of a day. With an eye toward power saving, I had a look at how all of the boards performed when they weren’t doing anything, and here one of the answers was very surprising.
Unless you’re crunching serious numbers or running a busy web server on your Raspberry Pi, chances are that it will be sitting idle most of the time, and that its idle current draw will actually dominate the total power consumption. Here, we can see that the Pi Zero 2 has a lot more in common with the old Pi Zero than with the other two boards. Doing nothing more than keeping WiFi running, the Zeros use less than a third of the power consumed by their bigger siblings. That’s a big deal.
I also wanted to investigate what would happen if you could turn WiFi off, or shut the system down entirely, analogous to power-saving tricks that we use with smaller microcontrollers all the time. To test this, I ran a routine from an idle state that shut the WiFi off, waited 10 seconds, and then shut the system down. I was surprised by two things. One, the power consumed by WiFi in standby isn’t really that significant — you can see it activating periodically during the idle phase.
Second, the current draw of a shut-down system varied dramatically across the boards. I’m calling this current “zombie current” because this is the current drawn by the board when the CPU brain is shut off entirely. To be absolutely certain that I was measuring zombie current correctly, I unplugged the boards about ten seconds after shutdown. These are the traces that you see here, plotted for each system. There are four phases: idle, idle with no WiFi, shut down / zombie, and finally physically pulling the plug.
The Pi 4 draws around 240 mA when it is shut down, or 1.2 W! The Pi 3 draws around 90 mA, or 0.45 W. For comparison, the Pi Zero 2’s idle current is similar to the Pi 3’s zombie current. The Pi Zero 2 has a much-closer-to-negligible 45 mA zombie draw, and the original Pi Zero pulls even less.
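To put those zombie figures in battery terms, here is a quick sketch of how long a hypothetical 2000 mAh cell would last feeding nothing but a shut-down board (the capacity is an assumed example; converter losses are ignored):

```python
# Battery runtime consumed purely by "zombie" current after shutdown.
# The 2000 mAh capacity is a hypothetical example; regulator losses and
# battery chemistry are ignored for simplicity.
BATTERY_MAH = 2000

# Measured zombie draws from the tests above, in mA.
zombie_ma = {"Pi 4B": 240, "Pi 3B+": 90, "Pi Zero 2 W": 45}

for board, ma in zombie_ma.items():
    hours = BATTERY_MAH / ma
    print(f"{board}: cell drained in ~{hours:.1f} h while shut down")
```

A shut-down Pi 4 would flatten that cell in well under a day, which is why zombie current matters so much for battery projects.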
The point here is that while it’s not surprising that the power required to idle would increase for the more powerful CPUs, the extent of the variation in both idle and zombie current really dictates which boards to use in a battery-powered project. Watch out!
In that respect, with the processing power of the Pi 3B, significantly better power management all around, and coming in at half the price, the Raspberry Pi Zero 2 W is incredibly attractive for anything that needs to sip the juice but also needs to pack some punch. The old Pi Zero shined in small, headless projects, and it was the only real choice for battery-driven projects. The Pi Zero 2 definitely looks like a worthy successor, adding a lot more CPU power for not all that much electrical power.
Still, I don’t think that the Pi Zero 2 will replace the 3B+, its closest competitor, for the simple reason that the Pi 3 has more memory and much more versatile connectivity straight out of the box. If your project involves more than a few USB devices, or wired Ethernet, or “normal” HDMI connections, adding all of these extra parts can make a Zero-based setup almost as bulky as a B. And when it comes down to pure grunt, power-budget be damned, the Pi 4 is clearly still the winner.
But by combining four cores tightly with on-chip memory, the Raspberry Pi Zero 2 W is definitely the most energy-efficient Pi.
Online commerce, led by large companies like Amazon, Snapdeal, Jio, and Flipkart, has posed existential challenges to small retailers. Many that lacked the wherewithal to expand their online presence, especially during Covid-19, faced shrinking sales or had to fold up.
In this context, the government’s pioneering Open Network for Digital Commerce (ONDC) initiative, which aims to democratise digital commerce by moving away from a platform-centric approach to an open network, has the potential to dramatically transform e-commerce in India. It would drive retailers to give up functioning in silos and encourage all providers of products and services to join the network, regardless of the platforms they transact on, making it a conduit for innovation and expansion of the e-commerce base.
The implementation of this network would be similar to that of UPI, which has been a major success. The crux of the new initiative is to bring in transparency and enable the discovery of information through open network protocols. It would create a level playing field for all stakeholders and build an inclusive ecosystem for digital commerce.
According to an eMarketer report of December 2021, India holds fourth place in Asia-Pacific in terms of retail e-commerce sales, after China, Japan and South Korea. India’s online retail market is expected to grow at a CAGR of 19.8% to reach $85.5 billion by 2025, according to a report by Forrester. In 2020, online accounted for 3.6% of India’s total retail sales, with this projected to rise to 6% by 2025.
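The Forrester projection is ordinary compound-growth arithmetic; a sketch that back-computes the implied base-year figure (the base value and the five-year horizon are derived assumptions, not numbers quoted in the report):

```python
# Compound annual growth: future = base * (1 + rate) ** years.
# The 2020 base below is back-computed from the projection; it is an
# assumption, not a figure quoted in the report.
CAGR = 0.198
TARGET_2025 = 85.5  # $ billions
YEARS = 5           # assuming a 2020 -> 2025 horizon

base_2020 = TARGET_2025 / (1 + CAGR) ** YEARS
print(f"Implied 2020 base: ${base_2020:.1f} bn")

# Compounding the base forward recovers the projected figure.
projected = base_2020 * (1 + CAGR) ** YEARS
print(f"2025 projection: ${projected:.1f} bn")
```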
As per a Redseer report, the number of online buyers from tier-2 and smaller cities will rise from about 78 mn in 2021 to nearly 256 mn by 2026. Hence, the ONDC initiative will be a boon for small retailers who would be servicing customers from small towns in particular. For users, the biggest benefit would be that a comparison of all the available products as well as order placement could be done from the network without having to shift platforms. The ONDC initiative would also help the large players as their customer base would widen and they would be able to access competitive logistics support coverage on the network, thus resulting in speedy deliveries and lower costs for customers.
The ONDC programme aims to bring together 30 mn sellers and 10 mn merchants online, and cover at least 100 cities and towns by August. Critics have pointed out that unlike UPI which is an entirely digital process, ONDC would facilitate a buyer-seller match, which could lead to disputes. Also, the reliability of the seller and the quality of products cannot be guaranteed by the network.
According to equities research firm Jefferies, the entire workflow in the case of UPI is virtual and operates within a controlled environment, whereas in the case of ONDC processes would be stretched over online and offline modes and expectations from offline activities could be subjective and lead to dissatisfaction. Just as UPI offered incremental convenience as a unique value proposition, buyers need to experience a compelling reason to transact through ONDC. If ONDC can repeat the success of UPI, India would be heralding a new phase of e-commerce, an example which other countries may want to emulate eventually.
The writer is chairperson, Global Talent Track, a corporate training solutions firm