Enterprise IT architect certifications typically sit at the apex of certification programs, a level to which fewer than 1% of IT professionals ultimately ascend. Even so, many IT architect certifications are available, and you don't need to buy into a single certification sponsor's vision to reach the top.
Many IT certifications in this area fall outside vendor umbrellas, meaning they are vendor-neutral or vendor-agnostic. Nevertheless, vendor-specific IT certifications outnumber vendor-neutral ones by more than 2 to 1. That's why we devote the last section of this article to vendor-specific credentials we encountered in our search for the best enterprise architect certifications.
For IT pros who’ve already invested in vendor-specific certification programs, credentials at the architect level may indeed be worth pursuing. Enterprise architects are among the highest-paid employees and consultants in the tech industry.
Enterprise architects are technical experts who are able to analyze and assess organizational needs, make recommendations regarding technology changes, and design and implement those changes across the organization.
The national average salary, per SimplyHired, is $130,150, in a range from $91,400 to a whopping $185,330. Glassdoor reports an average of $133,433. Ultimately, the value of any IT certification depends on how long the individual has worked and in what part of the IT field.
Becoming an enterprise architect is not easy. While requirements vary by employer, most enterprise architects have a bachelor's degree or higher in a computer-related field along with five to 10 years of professional work experience. Many enterprise architects obtain additional certifications after graduation.
Certifications are a great way to demonstrate to prospective employers that you have the experience and technical skills necessary to do the job, and they give you a competitive edge in the hiring process. Certification holders also frequently earn more than their uncertified counterparts, making certifications a valuable career-building tool.
Below, you'll find our top five certification picks. Before you peruse our best picks, check out the results of our informal job board survey. The data indicates the number of job posts in which each featured certification was mentioned on a given day and should give you an idea of the relative popularity of each credential.
| Certification (sponsor) | Job site 1 | Job site 2 | Job site 3 | Job site 4 | Total |
|---|---|---|---|---|---|
| AWS Certified Solutions Architect (Amazon Web Services) | 1,035 | 464 | 2,672 | 240 | 4,411 |
| ITIL Master (Axelos) | 641 | 848 | 1,218 | 1,119 | 3,826 |
| TOGAF 9 (The Open Group) | 443 | 730 | 271 | 358 | 1,802 |
| Zachman Certified – Enterprise Architect (Zachman) | 86 | 107 | 631 | 252 | 1,076 |
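The final column of the survey table is simply the sum of the four per-site counts, which also yields the popularity ranking. A quick sanity check of that arithmetic, sketched in Python (the per-site column labels were not preserved, so only the counts themselves are used):

```python
# Job-board mention counts from the survey table above.
survey = {
    "AWS Certified Solutions Architect": [1035, 464, 2672, 240],
    "ITIL Master": [641, 848, 1218, 1119],
    "TOGAF 9": [443, 730, 271, 358],
    "Zachman Certified - Enterprise Architect": [86, 107, 631, 252],
}

# The "Total" column is the sum of the per-site counts.
totals = {cert: sum(counts) for cert, counts in survey.items()}

# Rank certifications by total mentions, most popular first.
ranking = sorted(totals, key=totals.get, reverse=True)
print(totals)
print(ranking[0])  # the most-mentioned certification
```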
Making its first appearance on the leaderboard is the Certified Solutions Architect credential from Amazon Web Services (AWS). AWS, an Amazon subsidiary, is the global leader in on-demand cloud computing. AWS offers numerous products and services to support its customers, including the popular Amazon Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2). AWS also offers numerous cloud applications and developer tools, including Amazon Comprehend, Amazon SageMaker Batch Transform and Amazon Lightsail.
AWS offers certifications at the foundation, associate and professional levels across five role-based categories: architect, developer, operations, cloud and specialty certifications. Foundation-level certifications validate a candidate’s understanding of the AWS Cloud and serve as a prerequisite to AWS specialty certifications. Foundation certifications are a recommended starting place for those seeking higher-level credentials.
Associate credentials typically have no prerequisites and focus on technical skills. They are required to obtain professional-level certifications, which are the highest level of technical certification available. Specialty certs, meanwhile, focus on skills in targeted areas.
AWS currently offers the following credentials:
The AWS Certified Solutions Architect credential is available at the associate and professional levels. The associate credential targets candidates with at least one year of experience architecting and implementing solutions based on AWS applications and technologies. AWS updated the associate-level exam in February 2018 to include architecture best practices and new services.
The AWS Certified Solutions Architect – Professional certification targets senior AWS architects who can architect, design, implement and manage complex enterprise-level AWS solutions based on defined organizational requirements. Candidates should have a minimum of two years’ direct experience deploying and designing on the AWS cloud and be able to translate organizational requirements into solutions and recommend best practices. The associate credential is a mandatory prerequisite.
| Certification name | Certified Solutions Architect – Associate<br>Certified Solutions Architect – Professional |
|---|---|
| Prerequisites and required courses | Associate: one year of hands-on experience recommended; AWS Certified Cloud Practitioner<br>Professional: Certified Solutions Architect – Associate credential, plus a minimum of two years of hands-on experience |
| Number of exams | Associate: one exam (65 questions, 130 minutes to complete)<br>Professional: one exam (170 minutes to complete) |
| Certification fees | Associate: $150 (practice exam $20)<br>Professional: $300 (practice exam $40) |
| Self-study materials | AWS makes sample questions, practice exams, exam guides, whitepapers and more available on the certification home page. |
CTA: Certified Technical Architect
In 1999, Salesforce revolutionized the world of CRM when it introduced the concept of using the cloud to provide top-notch CRM software. Today, Salesforce has more than 150,000 customers, making it the industry leader for CRM enterprise cloud platforms. Currently, Salesforce offers solutions for various focus areas, including sales, service, marketing, commerce, engagement, community, productivity (Quip), platform and ecosystem, integration, analytics, enablement, internet of things (IoT), artificial intelligence, mobility, and industry (financial and health).
To meet industry needs for qualified and experienced professionals with the skills necessary to support its growing customer base, Salesforce developed and maintains a top-tier certification program. It offers many paths to candidates, including for administration, app building, architecture and marketing.
Salesforce Architect certifications are hierarchical, with most (but not all) lower-level credentials serving as prerequisites for more advanced credentials. At the top of the certification pyramid is the highest credential a Salesforce professional can earn – the Certified Technical Architect (CTA), which is our featured Salesforce certification.
The Salesforce Architect certification pyramid has three levels:
Salesforce requires CTAs to maintain current skills. Credential holders must pass maintenance module exams with each new product release cycle (typically in summer, winter and spring). While challenging to earn, the CTA is important for IT professionals who are serious about a Salesforce technologies career.
| Certification name | Certified Technical Architect (CTA) |
|---|---|
| Prerequisites and required courses | Salesforce Certified Application Architect and Salesforce Certified System Architect credentials |
| Number of exams | One exam (four hours to complete; candidates must formulate, justify and present recommendations based on a hypothetical scenario to a review board) |
| Certification fees | Retake fee: $3,000 |
| Self-study materials | Salesforce maintains links on the certification webpage to numerous review materials, including the online documentation, tip sheets, user guides, exam guide and outline, Architect Journey e-books, Trailhead trails, and the Salesforce Certification Guide. |
ITIL Master Certificate – IT Service Management
One of our favorite credential sets (and for employers as well, judging by job board numbers) is the ITIL for IT Service Management credentials from Axelos. Axelos is a global provider of standards designed to drive best practices and quality throughout organizations. ITIL (Information Technology Infrastructure Library) joined the Axelos family in 2013.
Axelos manages ITIL credentialing requirements and updates, provides accreditation to Examination Institutes (EIs), and licenses organizations seeking to use ITIL. In addition to ITIL certifications, Axelos offers credentials for PRINCE2 2017 (which includes Foundation, Practitioner and Agile qualifications), PRINCE2 Agile, RESILIA, MSP, MoP, M_o_R, P3O, MoV, P3M3 and AgileSHIFT.
ITIL is a set of well-defined and well-respected best practices that specifically target the area of IT service management. There are more than 2 million ITIL-certified practitioners worldwide. ITIL is perhaps the most widely known and globally adopted set of best practices and management tools for IT service management and support.
Axelos maintains a robust ITIL certification portfolio consisting of five ITIL credentials:
Axelos introduced ITIL 4 in early 2019. ITIL 3 practitioners should check the Axelos website frequently for updates about the transition to ITIL 4 and availability of the ITIL 4 transition modules.
The ITIL Master is the pinnacle ITIL certification, requiring experience, dedication, and a thorough understanding of ITIL principles, practices, and techniques. To gain the ITIL Master designation, candidates must have at least five years of managerial, advisory or other leadership experience in the field of IT service management. They must also possess the ITIL Expert certification. Once the skill and certification requirements are met, the real certification work begins.
Upon completing the prerequisites, candidates must register with PeopleCert, the sole approved Axelos Examination Institute, and submit an application. Next, candidates prepare and submit a proposal for a business improvement to implement within their organization. The proposal submission is followed by a “work package,” which documents a real-world project that encompasses multiple ITIL areas.
The work package (1) validates how the candidate applied ITIL principles, practices, and techniques to the project; and (2) documents the effectiveness of the solution and the ultimate benefit the business received as a result of the ITIL solution. Finally, candidates must pass an interview with an assessment panel where they defend their solution.
Axelos will soon sponsor 50 people in their quest to obtain the ITIL 4 Master certification. You can register your interest in the program on the Axelos website.
| Certification name | ITIL Master Certificate – IT Service Management |
|---|---|
| Prerequisites and required courses | ITIL Expert certificate; five years of IT service management experience in managerial, leadership or advisory roles |
| Number of exams | No exam required, but candidates must register with PeopleCert, submit a proposal for a business improvement, submit a work package documenting a real-world project, and pass an interview with an assessment panel |
| Certification fees | $4,440 if all ITIL credits were obtained through PeopleCert<br>$5,225 if some ITIL credits were obtained from other institutes |
| Self-study materials | Axelos provides documentation to guide candidates in the preparation of proposal and work package submissions. Available documents include ITIL Master FAQs, ITIL Master Proposal Requirements and Scope, and ITIL Master Work Package Requirements and Scope. |
A leader in enterprise architecture, The Open Group’s standards and certifications are globally recognized. The TOGAF (The Open Group Architecture Framework) standard for enterprise architecture is popular among leading enterprise-level organizations. Currently, TOGAF is the development and architecture framework of choice for more than 80% of global enterprises.
TOGAF’s popularity reflects that the framework standard is specifically geared to all aspects of enterprise-level IT architectures, with an emphasis on building efficiency within an organization. The scope of the standard’s approach covers everything from design and planning stages to implementation, maintenance, and governance.
The Open Group offers several enterprise architect credentials, including TOGAF, Open CA, ArchiMate, IT4IT and the foundational Certified Technical Specialist (Open CTS).
The Open Group reports that there are more than 75,000 TOGAF-certified enterprise architects. At present, there are two TOGAF credentials: the TOGAF 9 Foundation (Level 1) and TOGAF 9 Certified (Level 2). (The TOGAF framework is currently based on version 9.2, although the credential name still reflects version 9.)
The TOGAF 9 Foundation, or Level 1, credential targets architects who demonstrate an understanding of TOGAF principles and standards. A single exam is required to earn the Level 1 designation. The Level 1 exam focuses on TOGAF-related concepts such as TOGAF reference models, terminology, core concepts, standards, ADM, architectural governance and enterprise architecture. The Level 1 credential serves as a steppingstone to the more advanced TOGAF Level 2 certification.
The TOGAF 9 Certified, or Level 2, credential incorporates all requirements for Level 1. Level 2 TOGAF architects possess in-depth knowledge of TOGAF standards and principles and can apply them to organizational goals and enterprise-level infrastructure. To earn this designation, candidates must first earn the Level 1 credential and pass the Level 2 exam. The Level 2 exam covers TOGAF concepts such as ADM phases, governance, content framework, building blocks, stakeholder management, metamodels, TOGAF techniques, reference models and ADM iterations.
Candidates wanting a fast track to Level 2 certification may take a combination exam, which covers requirements for both Level 1 and 2. Training is not mandatory for either credential but is highly recommended. Training classes run 2-5 days, depending on the provider and whether you’re taking the combined or single-level course. The Open Group maintains a list of approved training providers and a schedule of current training opportunities on the certification webpage.
| Certification name | TOGAF 9 Foundation (Level 1)<br>TOGAF 9 Certified (Level 2) |
|---|---|
| Prerequisites and required courses | Level 1: none<br>Level 2: TOGAF 9 Foundation (Level 1) credential |
| Number of exams | Level 1: one exam (40 questions, 60 minutes, 55% required to pass)<br>Level 2: one exam (eight questions, 90 minutes)<br>Combined Level 1 and 2: one exam (48 questions, 2.5 hours) |
| Certification fees | $320 each for the Level 1 and Level 2 exams; $495 for the combined Level 1 and Level 2 exam. Exams are administered by Pearson VUE. Some training providers include the exam with the training course. |
| Self-study materials | A number of resources are available from The Open Group, including whitepapers, webinars, publications, TOGAF standards, the TOGAF Foundation Study Guide ($29.95 for PDF; includes a practice exam), a VCE exam (99 cents for PDF) and the TOGAF 9 Certified Study Guide (a combined study guide is available for $59.95). The Open Group also maintains a list of accredited training course providers and a calendar of training events. |
Zachman Certified – Enterprise Architect
Founded in 1990, Zachman International promotes education and research for enterprise architecture and the Zachman Framework. Rather than a traditional process or methodology, the Zachman Framework is more accurately described as an "ontology": instead of focusing on process or implementation, it focuses on the properties, types and interrelationships of the entities that exist within a particular domain. The Zachman Framework ontology focuses on the structure, or definition, of the object and the enterprise. Developed by John Zachman, the framework sets a standard for enterprise architecture ontology.
Zachman International currently offers four enterprise architect credentials:
Zachman credentials are valid for three years. To maintain these credentials, candidates must earn continuing education credits (referred to as EADUs). The total number of EADUs required varies by certification level.
| Certification name | Enterprise Architect Associate Certification (Level 1)<br>Enterprise Architect Practitioner Certification (Level 2)<br>Enterprise Architect Professional Certification (Level 3)<br>Enterprise Architect Educator Certification (Level 4) |
|---|---|
| Prerequisites and required courses | Level 1 Associate: four-day Modeling Workshop ($3,499)<br>Level 2 Practitioner: none<br>Level 3 Professional: none<br>Level 4 Educator: review all materials related to the Zachman Framework; Level 3 Professional recommended |
| Number of exams | Level 1 Associate: one exam<br>Level 2 Practitioner: no exam; case studies and referee review required<br>Level 3 Professional: no exam; case studies and referee review required<br>Level 4 Educator: no exam; must develop and submit curriculum and course materials for review and validation |
| Certification fees | Level 1 Associate: exam fee included as part of the required course<br>Level 2 Practitioner: none; included as part of the Level 1 required course<br>Level 3 Professional: not available<br>Level 4 Educator: not available |
| Self-study materials | Live classroom and distance learning opportunities are available. Zachman also offers webcasts, a glossary, The Zachman Framework for Enterprise Architecture and reference articles. |
Beyond the top 5: More enterprise architect certifications
The Red Hat Certified Architect (RHCA) is a great credential, especially for professionals working with Red Hat Enterprise Linux.
The Project Management Professional (PMP) certification from PMI continues to appear in many enterprise architect job descriptions. Although the PMP is not an enterprise architect certification per se, many employers look for this particular combination of skills.
Outside of our top five vendor-neutral enterprise architect certifications (which focus on more general, heterogeneous views of IT systems and solutions), there are plenty of architect-level certifications from a broad range of vendors and sponsors, most of which are vendor-specific.
The table below identifies those vendors and sponsors, names their architect-level credentials, and provides links to more information on those offerings. Choosing one or more of these certifications for research and possible pursuit will depend on where you work or where you’d like to work.
| Sponsor | Enterprise architect certification | More information |
|---|---|---|
| BCS | BCS Practitioner Certificate in Enterprise and Solutions Architecture | BCS homepage |
| Cisco | Cisco Certified Architect (CCAr) | CCAr homepage |
| Enterprise Architecture Center of Excellence (EACOE) | EACOE Enterprise Architect<br>EACOE Senior Enterprise Architect<br>EACOE Distinguished Enterprise Architect<br>EACOE Enterprise Architect Fellow | EACOE Architect homepage |
| EMC | EMC Cloud Architect Expert (EMCCAe) | GoCertify |
| FEAC Institute | Certified Enterprise Architect (CEA) Black Belt<br>Associate Certified Enterprise Architect (ACEA) Green Belt | FEAC CEA homepage |
| Hitachi Vantara | Hitachi Architect (three tracks: Infrastructure, Data Protection, and Pentaho Solutions)<br>Hitachi Architect Specialist (two tracks: Infrastructure and Converged) | Training & Certification homepage |
| IASA | Certified IT Architect – Foundation (CITA-F)<br>Certified IT Architect – Associate (CITA-A)<br>Certified IT Architect – Specialist (CITA-S)<br>Certified IT Architect – Professional (CITA-P) | |
| National Instruments | Certified LabVIEW Architect (CLA) | CLA homepage |
| Nokia | Nokia Service Routing Architect (SRA) | SRA homepage |
| Oracle | Oracle Certified Master, Java EE Enterprise Architect | Java EE homepage |
| Red Hat | Red Hat Certified Architect (RHCA) | RHCA homepage |
| SOA (Arcitura) | Certified SOA Architect | SOA Architect homepage |
These architect credentials typically represent the pinnacle of the certification programs to which they belong, functioning in many cases as high-value capstones. The group of individuals who attain such credentials is often quite small, but membership brings tight sponsor relationships, high levels of sponsor support and information delivery, and stratospheric salaries and professional kudos.
Often, such certifications provide deliberately difficult and challenging targets for a small, highly select group of IT professionals. Earning one or more of these certifications is generally the culmination of a decade or more of professional growth, high levels of effort, and considerable expense. No wonder, then, that architect certifications are highly regarded by IT pros and highly valued by their employers.
Your choice of enterprise architect credential will often be dictated by decisions your employer (or industry sector, in the case of government or DoD-related work environments) has already made, independent of your own efforts. Likewise, most vendor-specific architecture credentials make sense based on what's deployed in your work environment or in a job you'd like to occupy.
Though IT pros face many potential choices, the number they can or should actually pursue will be shaped by their circumstances.
Thani Sokka has over 17 years of experience in systems engineering, enterprise architecture, design and development, software project management, and data/information modeling, working with the latest IT systems technologies and methodologies. He has spent significant time designing solutions for the public sector, media, retail, manufacturing, financial, biomedical, and social/gaming industries. At Google, Thani is a Strategic Account Manager focused on helping Google Cloud Platform's largest customers derive the most from Google's cloud technologies, including its compute, storage, and big data solutions. He also works closely with the Google Cloud Platform Product Management and Product Engineering teams to help drive the direction of Google's Enterprise Cloud Platform business. Prior to Google, Thani was an enterprise architect at Oracle focused on helping federal organizations implement SOA (service-oriented architecture) solutions. Thani also worked as a senior IT consultant at Booz Allen Hamilton, a lead software architect at Thomson Reuters, and a software engineer at MicroStrategy. Thani has achieved various IT certifications from organizations such as MicroStrategy, Oracle, and The Open Group (TOGAF). He holds an M.S. in Computer Science from Johns Hopkins University and a B.S. in Computer Science, Biomedical Engineering, and Electrical Engineering from Duke University.
I use the Group Policy Editor to configure a lot of settings on Windows 11 and Windows 10. Recently, when I tried opening it from the Run prompt or directly through Control Panel, I received an error stating, "Failed to open the Group Policy Object on this computer. You might not have the appropriate rights," along with an unspecified error. If you get the same error, here is how you can quickly fix the issue and regain access to the Group Policy Editor.
The message was surprising because I had not changed anything that could have resulted in the error message. When I navigated to C:\Windows\System32\GroupPolicy, it had all the policies intact, but the Group Policy Editor wasn’t working. So here is what I did to resolve the issue. Make sure that your user account has Admin privileges.
There is one more way to fix this.
You can choose to delete all the files inside the Machine folder instead of renaming it. Windows will automatically recreate the required files when you relaunch the policy editor.
After going through Microsoft and TechNet forums, I noticed some users reporting the same problem, and one of them described corruption of the Registry.pol file, accompanied by Event ID 1096. The file stores registry-based policy settings, including Application Control Policies, Administrative Templates, and more. A log entry in the Event Viewer pointed toward this corruption. The description stated:
The processing of Group Policy failed. Windows could not apply the registry-based policy settings for the Group Policy object LocalGPO. Group Policy settings will not be resolved until this event is resolved. View the event details for more information on the file name and path that caused the failure.
This affirms the user's report. What you can do is delete the Registry.pol file inside the Machine folder, then launch the Group Policy Editor again.
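For readers who prefer to script the cleanup, here is a minimal sketch using Python's standard library. It uses a temporary directory as a stand-in for the real folder (C:\Windows\System32\GroupPolicy on an actual Windows machine) so the steps can be dry-run safely; for the real fix, run from an elevated prompt and point the path at the actual GroupPolicy folder.

```python
from pathlib import Path
import tempfile

# Stand-in for C:\Windows\System32\GroupPolicy on a real Windows machine.
gp_root = Path(tempfile.mkdtemp()) / "GroupPolicy"
machine = gp_root / "Machine"
machine.mkdir(parents=True)
(machine / "Registry.pol").write_bytes(b"PReg")  # dummy policy file

# The fix: delete the (potentially corrupt) Registry.pol from the
# Machine folder; Windows recreates it when policies are next applied.
pol = machine / "Registry.pol"
if pol.exists():
    pol.unlink()

print(pol.exists())  # False: the corrupt file is gone
```

On a real system, reopening the Group Policy Editor (or running gpupdate /force) then prompts Windows to recreate the required policy files.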
Read: Computer policy could not be updated successfully, The processing of Group Policy failed.
I hope this helps you resolve the error.
Now read: How to reset all Local Group Policy settings to default in Windows 11/10.
In keeping with the understanding that knowledge is a public good and should be transmitted as broadly as possible, the faculty of Connecticut College has adopted an Open Access Policy. This policy was modeled on those already in place at both large research institutions and peer Oberlin Group member colleges.
The policy seeks to make scholarship produced by the faculty of the College freely available to all through our institutional repository, Digital Commons @ Connecticut College, unless prohibited by the licensing agreement between the author and publisher.
The policy will benefit the faculty, by increasing the potential audience for their scholarship; the College, by enhancing its research reputation; and the broader community, by ensuring that scholars without access to research libraries will still be able to carry out their work.
The Open Access movement has gained considerable strength over the past decade with many funding agencies requiring free access to grant-funded research and over 100 colleges and universities adopting Open Access policies. We are excited to be a part of that movement.
To make participation in Connecticut College's Open Access policy simple, Information Services has developed a submission form. It can be found at http://www.conncoll.edu/camelweb/index.cfm?fuseaction=library_manuscripts.
If you have any questions about the Open Access policy, you may contact Ben Panciera.
Open Access is defined as the practice of offering scholarly research freely over the Internet. For practical purposes, when we talk of Open Access here at Connecticut College, we are referring to one of two things: publishing in a journal that makes its articles freely available online, or self-archiving a copy of traditionally published research in an institutional repository such as Digital Commons.
As a freely available resource on the Web, Open Access research offers faster access to content and can result in more readers than traditional research. Both forms of Open Access come in response to the rising cost of serial subscriptions. These escalating costs have limited access to scholarly research in colleges and universities, and have eliminated access for those researchers who do not have research libraries. Both of these limitations are particularly felt in areas of the world (Asia, Africa, and Latin America) with significantly fewer research libraries and research libraries with low budgets.
The faculty at Connecticut College adopted an Open Access policy in 2012. The Connecticut College policy deals only with author self-archiving and depositing peer-reviewed journal articles into Digital Commons upon their publication in a traditional journal.
Over 100 colleges and universities and a handful of governmental agencies have adopted policies in recent years. These include major research institutions like Harvard, MIT, Princeton, Texas, and Stanford and liberal arts colleges like Oberlin, Bryn Mawr, Bucknell, and Lafayette.
When a faculty member has an article accepted for publication at a peer-reviewed journal, he or she will forward to Information Services an electronic copy of the manuscript after peer review, but before the publisher has finalized it (i.e. the post-print copy). Consulting with the faculty member, IS staff will then determine whether it is permitted to place the article online and under what conditions. About 70% of scholarly journals allow for some form of free republication of traditionally published research.
Most publishers require that we explicitly indicate the journal, issue, and page numbers for the published version of the article. Many also require that we create a link to the subscription-only version of the article online. Some publishers allow (or even require) that the post-print is replaced with a pdf of the final published version as it appears in the journal. Some publishers require an embargo on Open Access self-archiving that may range from six to twenty-four months.
If no self-archiving is allowed, the article will not be posted online. There will be no action contrary to any publisher’s policy concerning republication. If the author wishes, IS staff can create a record for the article in Digital Commons and link to the subscription-only version online.
Yes. It will not be required under the proposed Open Access policy, but conference papers, reviews, articles for non peer-reviewed publications, fiction, poetry, etc. may be posted in Digital Commons, as long as it is allowed by the publisher. Post-prints of articles published before the adoption of the policy may be posted online as requested.
Several studies of self-archived research in the natural and social sciences have shown that these Open Access articles receive more citations than articles in the same issues of the same journals that are not self-archived. The number of citations is also more likely to hold steady or increase over time. You can also learn about the download history of your work, along with the search terms researchers used to find it, in the monthly author report from Digital Commons. These reports tell you not only the number of times an article was downloaded, but also the domains from which the download requests came.
The College benefits from the higher profile of its faculty research. The College community also gains greater access to the research produced by its own faculty. One study found that over 20% of the research published by Connecticut College faculty appears in journals to which the library cannot subscribe.
Yes, the broader scholarly community gains access to the research produced by the faculty of the College. This is of critical importance to independent scholars without consistent access to a research library and to those at institutions in the United States and abroad that can’t afford expensive journal subscriptions.
The Open Access policy requests that authors grant a license to Connecticut College to freely display their research on the Internet, subject to the terms and conditions of the authors’ agreements with their publishers. The author or publisher will continue to retain copyright. All of the rights and duties that exist in traditional publication remain in an Open Access environment, including the ability to prosecute in cases of piracy or plagiarism.
If you want to maintain your own website, the best solution would be to link from your site to the archived copy in Digital Commons. Digital Commons presents several advantages for the author. There are multiple backup systems for the Digital Commons servers. Documents in Digital Commons are more visible to search engines. Digital Commons also compiles monthly reports for authors documenting the number of downloads of each paper and the search strings or referring sites researchers used to find each paper.
The proposed policy has an opt-out provision; no member of the faculty will be forced to publish in Open Access.
It is the practice in self-archiving that coauthors do not need to be notified in advance of their paper being placed online. Repositories do indicate all authors of a paper and most list institutional affiliation at the time of publication. If you wish to notify your coauthors in advance of making your article available, you are free to do so. If you do not want to make your paper available because you cannot notify your coauthors, that is permissible under the proposed policy.
Daniel Hatter began writing professionally in 2008. His writing focuses on topics in computers, Web design, software development and technology. He earned his Bachelor of Arts in media and game development and information technology at the University of Wisconsin-Whitewater.
İşbank and Schneider Electric recognised for building their technology strategies around customers to fuel business growth
LONDON, August 22, 2023--(BUSINESS WIRE)--Forrester (Nasdaq: FORR) today announced that İşbank and Schneider Electric are the winners of its Technology Strategy Impact and Enterprise Architecture Awards for Europe, the Middle East, and Africa (EMEA), respectively. These awards, which will be presented at Technology & Innovation EMEA, recognise both organisations for executing technology strategies that accelerate business growth and drive customer outcomes.
İşbank, the largest private bank in Turkey, is this year’s recipient of Forrester’s Technology Strategy Impact Award for EMEA. The bank is being honoured for implementing a digital strategy to strengthen its operational efficiency and resilience while meeting the real-time needs of its millions of customers. In addition to leveraging AI to deliver customer insights, İşbank is also contributing to sustainability efforts through its mobile banking app, which is designed to educate customers on how to reduce their carbon footprint.
Schneider Electric, a French multinational provider of digital automation and energy management services, is this year’s recipient of Forrester’s Enterprise Architecture Award, presented in partnership with The Open Group, author of the TOGAF® standard, which was developed by The Open Group Architecture Forum. The firm’s enterprise architecture is delivering seamless customer experiences while also eliminating complexities surrounding its products, systems, and supply chain. Schneider Electric has also been named an EMEA finalist for Forrester’s 2023 Technology Strategy Impact Award, along with multinational banking and financial services company PKO Bank Polski. African fintech company Interswitch Group is the 2023 finalist for Forrester’s Enterprise Architecture Award for EMEA.
Forrester Technology Award recipients will share their success stories at Technology & Innovation EMEA, taking place in London and digitally on October 12–13, 2023. The event is a leading forum for chief information officers, chief technology officers, chief digital officers, and other technology leaders to learn best practices and tools to home in on the technology strategy best suited to fuel their business growth.
"We are excited to celebrate Forrester’s Technology Award winners for EMEA, each of which is demonstrating a laser focus on customer and business outcomes," said Laura Koetzle, VP, group research director at Forrester. "In using adaptivity, creativity, and resilience to reconfigure their technology strategies and capabilities, these companies can better meet future customer and employee demands while also driving business growth."
Forrester (Nasdaq: FORR) is one of the most influential research and advisory firms in the world. We help leaders across technology, customer experience, digital, marketing, sales, and product functions use customer obsession to accelerate growth. Through Forrester’s proprietary research, consulting, and events, leaders from around the globe are empowered to be bold at work — to navigate change and put their customers at the centre of their leadership, strategy, and operations. Our unique insights are grounded in annual surveys of more than 700,000 consumers, business leaders, and technology leaders worldwide; rigorous and objective research methodologies, including Forrester Wave™ evaluations; 100 million real-time feedback votes; and the shared wisdom of our clients. To learn more, visit Forrester.com.
The early decades of software development generally ran on a culture of open access and free exchange, where engineers could dive into each other’s code across time zones and institutions to make it their own or squash a few bugs. But this new printer ran on inaccessible proprietary software. Stallman was locked out—and enraged that Xerox had violated the open code-sharing system he’d come to rely on.
A few years later, in September 1983, Stallman released GNU, an operating system designed to be a free alternative to one of the dominant operating systems at the time: Unix. Stallman envisioned GNU as a means to fight back against the proprietary mechanisms, like copyright, that were beginning to flood the tech industry. The free-software movement was born from one frustrated engineer’s simple, rigid philosophy: for the good of the world, all code should be open, without restriction or commercial intervention.
Today, 96% of all code bases incorporate open-source software. GitHub, the biggest platform for the open-source community, is used by more than 100 million developers worldwide. The Biden administration’s Securing Open Source Software Act of 2022 publicly recognized open-source software as critical economic and security infrastructure. Even AWS, Amazon’s money-making cloud arm, supports the development and maintenance of open-source software; it committed its portfolio of patents to an open use community in December of last year. Over the last two years, while public trust in private technology companies has plummeted, organizations including Google, Spotify, the Ford Foundation, Bloomberg, and NASA have established new funding for open-source projects and their counterparts in open science efforts—an extension of the same values applied to scientific research.
The fact that open-source software is now so essential means that long-standing leadership and diversity issues in the movement have become everyone’s problems. Many open-source projects began with “benevolent dictator for life” (BDFL) models of governance, where original founders hang on to leadership for years—and not always responsibly. Stallman and some other BDFLs have been criticized by their own communities for misogynistic or even abusive behavior. Stallman stepped down as president of the Free Software Foundation in 2019 (although he returned to the board two years later). Overall, open-source participants are still overwhelmingly white, male, and located in the Global North. Projects can be overly influenced by corporate interests. Meanwhile, the people doing the hard work of keeping critical code healthy are not consistently funded. In fact, many major open-source projects still operate almost completely on volunteer steam.
The 2010s backlash against tech’s unfettered growth, and the latest AI boom, have focused a spotlight on the open-source movement’s ideas about who has the right to use other people’s information online and who benefits from technology. Clement Delangue, CEO of the open-source AI company Hugging Face, which was recently valued at $4 billion, testified before Congress in June of 2023 that “ethical openness” in AI development could help make organizations more compliant and transparent, while allowing researchers beyond a few large tech companies access to technology and progress. “We’re in a unique cultural moment,” says Danielle Robinson, executive director of Code for Science and Society, a nonprofit that provides funding and support for public-interest technology. “People are more aware than ever of how capitalism has been influencing what technologies get built, and whether you have a choice to interact with it.” Once again, free and open-source software have become a natural home for the debate about how technology should be.
Free as in freedom
The early days of the free-software movement were fraught with arguments about the meaning of “free.” Stallman and the Free Software Foundation (FSF), founded in 1985, held firm to the idea of four freedoms: people should be allowed to run a program for any purpose, study how it works from the source code and change it to meet their needs, redistribute copies, and distribute modified versions too. Stallman saw free software as an essential right: “Free as in free speech, not free beer,” as his apocryphal slogan goes. He created the GNU General Public License, what’s known as a “copyleft” license, to ensure that the four freedoms were protected in code built with GNU.
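In practice, the GNU GPL's protections travel with the code itself: the FSF recommends placing a short notice at the top of each source file. The following is a condensed sketch of that standard notice (the program name and author here are placeholders, and the FSF's own documentation gives the canonical full wording):

```text
This file is part of Frobnicate (a hypothetical program).
Copyright (C) 2023  Jane Developer

Frobnicate is free software: you can redistribute it and/or modify it
under the terms of the GNU General Public License as published by the
Free Software Foundation, either version 3 of the License, or (at your
option) any later version.

Frobnicate is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
General Public License for more details.
```

Because the license requires that modified and redistributed versions carry these same terms, the four freedoms propagate to every downstream copy—the "copyleft" mechanism Stallman designed.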
Linus Torvalds, the Finnish engineer who in 1991 created the now ubiquitous Unix alternative Linux, didn’t buy into this dogma. Torvalds and others, including Microsoft’s Bill Gates, believed that the culture of open exchange among engineers could coexist with commerce, and that more-restrictive licenses could forge a path toward both financial sustainability and protections for software creators and users. It was during a 1998 strategic meeting of free-software advocates—which notably did not include Stallman—that this pragmatic approach became known as “open source.” (The term was coined and introduced to the group not by an engineer, but by the futurist and nanotechnology scholar Christine Peterson.)
Karen Sandler, executive director of the Software Freedom Conservancy, a nonprofit that advocates for free and open-source software, saw firsthand how the culture shifted from orthodoxy to a big-tent approach with room for for-profit entities when she worked as general counsel at the Software Freedom Law Center in the early 2000s. “The people who were ideological—some of them stayed quite ideological. But many of them realized, oh, wait a minute, we can get jobs doing this. We can do well by doing good,” Sandler remembers. By leveraging the jobs and support that early tech companies were offering, open-source contributors could sustain their efforts and even make a living doing what they believed in. In that manner, companies using and contributing to free and open software could expand the community beyond volunteer enthusiasts and strengthen the work itself. “How could we ever make it better if it’s just a few radical people?” Sandler says.
As the tech industry grew around private companies like Sun Microsystems, IBM, Microsoft, and Apple in the late ’90s and early ’00s, new open-source projects sprang up, and established ones grew roots. Apache emerged as an open-source web server in 1995. Red Hat, a company offering enterprise companies support for open-source software like Linux, went public in 1999. GitHub, a platform originally created to support version control for open-source projects, launched in 2008, the same year that Google released Android, the first open-source phone operating system. The more pragmatic definition of the concept came to dominate the field. Meanwhile, Stallman’s original philosophy persisted among dedicated groups of believers—where it still lives today through nonprofits like FSF, which only uses and advocates for software that protects the four freedoms.
“If a company only ends up just sharing, and nothing more, I think that should be celebrated.”
As open-source software spread, a bifurcation of the tech stack became standard practice, with open-source code as the support structure for proprietary work. Free and open-source software often served in the underlying foundation or back-end architecture of a product, while companies vigorously pursued and defended copyrights on the user-facing layers. Some estimate that Amazon’s 1999 patent on its one-click buying process was worth $2.4 billion per year to the company until it expired. It relied on Java, an open-source programming language, and other open-source software and tooling to build and maintain it.
Today, corporations not only depend on open-source software but play an enormous role in funding and developing open-source projects: Kubernetes (initially launched and maintained at Google) and Meta’s React are both robust sets of software that began as internal solutions freely shared with the larger technology community. But some people, like the Software Freedom Conservancy’s Karen Sandler, identify an ongoing conflict between profit-driven corporations and the public interest. “Companies have become so savvy and educated with respect to open-source software that they use a ton of it. That’s good,” says Sandler. At the same time, they profit from their proprietary work—which they sometimes attempt to pass off as open too, a practice the scholar and organizer Michelle Thorne dubbed “openwashing” in 2009. For Sandler, if companies don’t also make efforts to support user and creator rights, they’re not pushing forward the free and open-source ethos. And she says for the most part, that’s indeed not happening: “They’re not interested in giving the public any appreciable rights to their software.”
Others, including Kelsey Hightower, are more sanguine about corporate involvement. “If a company only ends up just sharing, and nothing more, I think that should be celebrated,” he says. “Then if for the next two years you allow your paid employees to work on it, maintaining the bugs and issues, but then down the road it’s no longer a priority and you choose to step back, I think we should thank [the company] for those years of contributions.”
In stark contrast, FSF, now in its 38th year, holds firm to its original ideals and opposes any product or company that does not support the ability for users to view, modify, and redistribute code. The group today runs public action campaigns like “End Software Patents,” publishing articles and submitting amicus briefs advocating the end of patents on software. The foundation’s executive director, Zoë Kooyman, hopes to continue pushing the conversation toward freedom rather than commercial concerns. “Every belief system or form of advocacy needs a far end,” she says. “That’s the only way to be able to drive the needle. [At FSF], we are that far end of the spectrum, and we take that role very seriously.”
Free as in puppy
Forty years on from the release of GNU, there is no singular open-source community, “any more than there is an ‘urban community,’” as researcher and engineer Nadia Asparouhova (formerly Eghbal) writes in her 2020 book Working in Public: The Making and Maintenance of Open Source Software. There’s no singular definition, either. The Open Source Initiative (OSI) was founded in 1998 to steward the meaning of the phrase, but not all modern open-source projects adhere to the 10 specific criteria OSI laid out, and other definitions appear across communities. Scale, technology, social norms, and funding also range widely from project to project and community to community. For example, Kubernetes has a robust, organized community of tens of thousands of contributors and years of Google investment. Salmon is a niche open-source bioinformatics research tool with fewer than 50 contributors, supported by grants. OpenSSL, which encrypts an estimated 66% of the web, is currently maintained by 18 engineers compensated through donations and elective corporate contracts.
The major discussions now are more about people than technology: What does healthy and diverse collaboration look like? How can those who support the code get what they need to continue the work? “How do you include a voice for all the people affected by the technology you build?” asks James Vasile, an open-source consultant and strategist who sits on the board of the Electronic Frontier Foundation. “These are big questions. We’ve never grappled with them before. No one was working on this 20 years ago, because that just wasn’t part of the scene. Now it is, and we [in the open-source community] have the chance to consider these questions.”
“We need designers, ethnographers, social and cultural experts. We need everyone to be playing a role in open source.”
“Free as in puppy,” a phrase that can be traced back to 2006, has emerged as a valuable definition of “free” for modern open-source projects—one that speaks to the responsibilities of creators and users to each other and the software, in addition to their rights. Puppies need food and care to survive; open-source code needs funding and “maintainers,” individuals who consistently respond to requests and feedback from a community, fix bugs, and manage the growth and scope of a project. Many open-source projects have become too big, complicated, or important to be governed by one person or even a small group of like-minded individuals. And open-source contributors have their own needs and concerns, too. A person who’s good at building may not be good at maintaining; someone who creates a project may not want to or be able to run it indefinitely. In 2018, for instance, Guido van Rossum, the creator of the open-source programming language Python, stepped down from leadership after almost 30 years, exhausted from the demands of the mostly uncompensated role. “I’m tired,” he wrote in his resignation message to the community, “and need a very long break.”
Supporting the people who create, maintain, and use free and open-source software requires new roles and perspectives. Whereas the movement in its early days was populated almost exclusively by engineers communicating across message boards and through code, today’s open-source projects invite participation from new disciplines to handle logistical work like growth and advocacy, as well as efforts toward greater inclusion and belonging. “We’ve shifted from open source being about just the technical stuff to the broader set of expertise and perspectives that are required to make effective open-source projects,” says Michael Brennan, senior program officer with the Technology and Society program at the Ford Foundation, which funds research into open internet issues. “We need designers, ethnographers, social and cultural experts. We need everyone to be playing a role in open source if it’s going to be effective and meet the needs of the people around the world.”
One powerful source of support arrived in 2008 with the launch of GitHub. While it began as a version control tool, it has grown into a suite of services, standards, and systems that is now the “highway system” for most open-source development, as Asparouhova puts it in Working in Public. GitHub helped lower the barrier to entry, drawing wider contribution and spreading best practices such as community codes of conduct. But its success has also given a single platform vast influence over communities dedicated to decentralized collaboration.
Demetris Cheatham, until recently GitHub’s senior director for diversity and inclusion strategy, took that responsibility very seriously. To find out where things stood, the company partnered with the Linux Foundation in 2021 on a survey and resulting report on diversity and inclusion within open source. The data showed that despite a pervasive ethos of collaboration and openness (more than 80% of the respondents reported feeling welcome), communities are dominated by contributors who are straight, white, male, and from the Global North. In response, Cheatham, who is now the company’s chief of staff, focused on ways to broaden access and promote a sense of belonging. GitHub launched All In for Students, a mentorship and education program with 30 students drawn primarily from historically Black colleges and universities. In its second year, the program expanded to more than 400 students.
Representation has not been the only stumbling block to a more equitable open-source ecosystem. The Linux Foundation report showed that only 14% of open-source contributors surveyed were getting paid for their work. While this volunteer spirit aligns with the original vision of free software as a commerce-free exchange of ideas, free labor presents a major access issue. Additionally, 30% of respondents in the survey did not trust that codes of conduct would be enforced—suggesting they did not feel they could count on a respectful working environment. “We’re at another inflection point now where codes of conduct are great, but they’re only a tool,” says Code for Science and Society’s Danielle Robinson. “I’m starting to see larger cultural shifts toward rethinking extractive processes that have been a part of open source for a long time.” Getting maintainers paid and connecting contributors with support are now key to opening up open source to a more diverse group of participants.
With that in mind, this year GitHub established resources specifically for maintainers, including workshops and a hub of DEI tools. And in May, the platform launched a new project to connect large, well-resourced open-source communities with smaller ones that need help. Cheatham says it’s crucial to the success of any of these programs that they be shared for free with the broader community. “We’re not inventing anything new at all. We’re just applying open-source principles to diversity, equity, and inclusion,” she says.
GitHub’s influence over open source may be large, but it is not the only group working to get maintainers paid and expand open-source participation. The Software Freedom Conservancy’s Outreachy diversity initiative offers paid internships; as of 2019, 92% of past Outreachy interns have identified as women and 64% as people of color. Open-source fundraising platforms like Open Collective and Tidelift have also emerged to help maintainers tap into resources.
The philanthropic world is stepping up too. The Ford Foundation, the Sloan Foundation, Omidyar Network, and the Chan Zuckerberg Initiative, as well as smaller organizations like Code for Science and Society, have all recently begun or expanded their efforts to support open-source research, contributors, and projects—including specific efforts promoting inclusion and diversity. Govind Shivkumar from Omidyar Network told MIT Technology Review that philanthropy is well positioned to establish funding architecture that could help prove out open-source projects, making them less risky prospects for future governmental funding. In fact, research supported by the Ford Foundation’s Digital Infrastructure Fund contributed to Germany’s recent creation of a national fund for open digital infrastructure. Momentum has also been building in the US. In 2016 the White House began requiring at least 20% of government-developed software to be open source. Last year’s Securing Open Source Software Act passed with bipartisan support, establishing a framework for attention and investment at the federal level toward making open-source software stronger and more secure.
The fast-approaching future
Open source contributes valuable practices and tools, but it may also offer a competitive advantage over proprietary efforts. A document leaked in May from Google argued that open-source communities had pushed, tested, integrated, and expanded the capabilities of large language models more thoroughly than private efforts could’ve accomplished on their own: “Many of the new ideas [in AI development] are from ordinary people. The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.” The recently articulated concept of Time till Open Source Alternative (TTOSA)—the time between the release of a proprietary product and an open-source equivalent—also speaks to this advantage. One researcher estimated the average TTOSA to be seven years but noted that the process has been speeding up thanks to easy-to-use services like GitHub.
At the same time, much of our modern world now relies on underfunded and rapidly expanding digital infrastructure. There has long been an assumption within open source that bugs can be identified and solved quickly by the “many eyes” of a wide community—and indeed this can be true. But when open-source software affects millions of users and its maintenance is handled by handfuls of underpaid individuals, the weight can be too much for the system to bear. In 2021, a security vulnerability in a widely used open-source Apache library left an estimated hundreds of millions of devices exposed to hacking attacks. Major players across the industry were affected, and large parts of the internet went down. The vulnerability’s lasting impact is hard to quantify even now.
Other risks emerge from open-source development without the support of ethical guardrails. Proprietary efforts like Google’s Bard and OpenAI’s ChatGPT have demonstrated that AI can perpetuate existing biases and may even cause harm—while also not providing the transparency that could help a larger community audit the technology, strengthen it, and learn from its mistakes. But allowing anyone to use, modify, and distribute AI models and technology could accelerate their misuse. One week after Meta began granting access to its AI model LLaMA, the package leaked onto 4chan, a platform known for spreading misinformation. LLaMA 2, a new model released in July, is fully open to the public, but the company has not disclosed its training data, as would be typical of open-source projects—putting it somewhere in between open and closed by some definitions, but decidedly not open by OSI’s. (OpenAI is reportedly working on an open-source model as well but has not made a formal announcement.)
“There are always trade-offs in the decisions you make in technology,” says Margaret Mitchell, chief ethics scientist at Hugging Face. “I can’t just be wholeheartedly supportive of open source in all cases without any nuances or caveats.” Mitchell and her team have been working on open-source tools to help communities safeguard their work, such as gating mechanisms to allow collaboration only at the project owner’s discretion, and “model cards” that detail a model’s potential biases and social impacts—information researchers and the public can take into consideration when choosing which models to work with.
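For context, a model card on a platform like Hugging Face is typically just a README file: a small metadata header followed by prose sections describing the model's intended uses and known risks. A minimal illustrative sketch might look like the following (the model name, license, and section contents are hypothetical, not drawn from any real model):

```markdown
---
license: apache-2.0
tags:
- text-classification
---

# Model Card: example-org/demo-classifier

## Intended Use
Supported use cases, and uses that are explicitly out of scope.

## Bias, Risks, and Limitations
Known failure modes, performance gaps across demographic groups,
and recommendations for downstream users.

## Training Data
Summary of data sources and any filtering applied.
```

The point of the format is that this disclosure ships alongside the model weights themselves, so anyone evaluating the model sees the caveats before deciding to build on it.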
Open-source software has come a long way since its rebellious roots. But carrying it forward and making it into a movement that fully reflects the values of openness, reciprocity, and access will require careful consideration, financial and community investment, and the movement’s characteristic process of self-improvement through collaboration. As the modern world becomes more dispersed and diverse, the skill sets required to work asynchronously with different groups of people and technologies toward a common goal are only growing more essential. At this rate, 40 years from now technology might look more open than ever—and the world may be better for it.
© Copyright 2023 Technology Review, Inc. Distributed by TRIBUNE CONTENT AGENCY, LLC.
To simplify your arrival at the office in the morning, use bookmark groups to check your company's website stats and catch up on industry news. Firefox lets you group pages in the Bookmarks library, enabling rapid access to many pages at once. With just a few mouse clicks, Firefox will have your mundane morning tasks out of the way and you'll be meeting with important clients and brainstorming ideas for a new marketing campaign.
The Russian authorities have opened a criminal investigation into one of the leaders of a prominent independent election monitoring group, his lawyer said Thursday.
The case against Grigory Melkonyants, co-chair of Russia's leading election watchdog Golos, is the latest step in the months-long crackdown on Kremlin critics and rights activists that the government ratcheted up after sending troops into Ukraine.
Melkonyants' lawyer Mikhail Biryukov told The Associated Press that his client is facing charges of "organizing activities" of an "undesirable" group, a criminal offense punishable by up to six years in prison.
Golos has not been labeled "undesirable" — a label that under a 2015 law makes involvement with such organizations a criminal offense. But it was once a member of the European Network of Election Monitoring Organizations, a group that was declared "undesirable" in Russia in 2021.
Police raided the homes of a further 14 Golos members on Thursday in eight different cities, Russia's state news agency RIA Novosti reported. Melkonyants' apartment in Moscow was also raided, and he was taken in for questioning.
In an interview with the AP Thursday, David Kankiya, a governing council member at Golos, linked the pressure on the group to the upcoming regional elections in Russia in September and the presidential election that is expected to take place in the spring of 2024. "We see this as a form of political pressure and an attempt to stifle our activities in Russia," Kankiya said.
Golos was founded in 2000 and has since played a key role in independent monitoring of elections in Russia. Over the years, it has faced mounting pressure from the authorities. In 2013, the group was designated as a "foreign agent" — a label that implies additional government scrutiny and carries strong pejorative connotations. Three years later, it was liquidated as a non-governmental organization by Russia's Justice Ministry.
Golos has continued to operate without registering as an NGO, exposing violations at various elections, and in 2021 it was added to a new registry of "foreign agents," created by the Justice Ministry for groups that are not registered as a legal entity in Russia.
Independent journalists, critics, activists and opposition figures in Russia have come under increasing pressure from the government in recent years, and that pressure has intensified significantly amid the conflict in Ukraine. Multiple independent news outlets and rights groups have been shut down, labeled as "foreign agents," or outlawed as "undesirable." Activists and critics of the Kremlin have faced criminal charges.
The authorities have also banned popular social media platforms, such as Facebook, Instagram and Twitter, and have targeted other online services with hefty fines.
On Thursday, a Russian court imposed a $32,000 fine on Google for failing to delete allegedly false information about the conflict in Ukraine. The move by a magistrate’s court follows similar actions in early August against Apple and the Wikimedia Foundation that hosts Wikipedia.
According to Russian news reports, the court found that the YouTube video service, which is owned by Google, was guilty of not deleting videos with incorrect information about the conflict — which Russia characterizes as a "special military operation".