This is the continuation of a series of posts where I create a schematic and PCB in various EDA tools. Already, we’ve looked at Eagle CAD and KiCad, and taken a walk down memory lane with one of the first PCB design tools for the IBM PC, Protel Autotrax. One of the more controversial of these tutorials is my post on Fritzing. Fritzing is a terrible tool that you should not use, but before I get to that, I need to back up and explain what this series of posts is all about.
The introduction to this series of posts laid it out pretty plainly: for each post in the series, I take a reference schematic for a small, USB-enabled ATtiny85 development board, recreate the schematic, recreate the board, and build a new symbol and footprint in each piece of software. That last part — making a new symbol and footprint — is a point of contention for Fritzing users. “You cannot create a completely new part in Fritzing.” That’s a quote straight from the devs. For a PCB design tool, it’s a baffling decision, and I don’t know if I can call Fritzing a PCB design tool anymore.
If you’re like the majority of desktop or laptop users, the easiest tool to make pixel art is Microsoft Paint. With MS Paint, you can edit individual pixels, select colors, and even do flood fills. It’s exactly what you need if you want to create pixel art quickly with a tool that’s easy to use. There are better tools to create pixel art, though. Photoshop lets you zoom in to see individual pixels and has transparency and layers, and Aseprite is a professional tool specifically designed for the creation and animation of pixel art.
It’s easy to draw parallels between KiCad, Fritzing, MS Paint, Photoshop, and Aseprite. Fritzing and MS Paint are easy-to-learn tools where you can produce acceptable results quickly. This is a false equivalency, though; you can do anything you want in MS Paint, but you can’t do anything you want in Fritzing, because you can’t add custom parts. Fritzing is what MS Paint would be if MS Paint didn’t have the color blue.
Creating a custom part is necessary functionality of a PCB design tool. The first PCB design tool released for the PC had this functionality. Without the ability to create custom parts, Fritzing cannot legitimately call itself a PCB design tool and should not be used as such.
The Fritzing FAQ is wrong. Of course you can make custom parts in Fritzing. This summer, Adafruit created a whole bunch of Fritzing parts that still haven’t been added to the core libraries. Instead of complaining about the relatively small core library, or the difficulty in adding custom parts, I’m going to do something better: for the next two thousand words, I’m going to demonstrate how to create a custom part in Fritzing.
It should be noted that since the new Fritzing Parts Editor arrived in version 0.7.9 (the version that took away the ability to create custom parts), there have been no tutorials on how to create a custom part in Fritzing. This is the first such tutorial and, by definition, the best tutorial on creating custom parts in Fritzing. I encourage the Fritzing team to post a link to this tutorial on their blog and FAQ.
With the justification of why you should never use Fritzing and why this tutorial is necessary, let’s begin. This is how you create a custom part in Fritzing.
The picture above is of an ATtiny2313, a part not in the Fritzing core library. I created this part in just a few minutes using tools built right into Fritzing. Yes, you can make your own parts in Fritzing. Here’s how I did it.
From Fritzing’s ‘Core Parts’ selector, take the generic IC part and drop it onto the breadboard view. In the Inspector window, you will find options for what type of package this part is, how many pins it has, its label, and even the pin spacing. If you want to drop a 40-pin CERDIP 6502 into your Fritzing project, you can do that. If you want to drop a 64-pin Motorola 68000 into your Fritzing project, you can do that. If, for some reason, you want to add an IC that isn’t in the core Fritzing library, you can do that too. All of this is done semi-automagically by Fritzing. All you need to do is tell Fritzing the number of pins and what package it comes in.
What’s the bottom line? If you’re dealing with a DIP chip, a QFN, SOIC, or some other standard package, you can probably make a Fritzing part in about three minutes. Is this making a part from scratch? No, but for most use cases, this is all you need.
The challenge for this tutorial was to create a part from scratch. To that end, I’m going to build a purple and gold 64-pin DIP Motorola 68000. Why not, right?
Download Inkscape. It’s like Illustrator, only it doesn’t send your soul back to the Adobe mothership. Select File -> Document Properties, and set the size of the canvas to 3.2 × 0.98 inches. While you’re in that window, set the default measurement unit to ‘inches’.
The width of the canvas is the nominal width of the package, and the height is the nominal height of the package plus space for the pins. The pins will be squares 0.04 × 0.04 inches, so add 0.04 inches each to the top and bottom of the canvas (0.08 inches in total).
With the dimensions of the canvas set, draw a rectangle. If you’re feeling exceptionally artistic, make the rectangle purple and add some gold accents. Now it’s time to add pins. This is a 64-pin device, so add sixty-four rectangles. Use Inkscape to arrange and distribute them logically. In Inkscape’s ‘Object Properties’ window (Shift+Ctrl+O), set the ID of each rectangle to ‘connector0pin’ to ‘connector63pin’. Yes, Fritzing uses zero-indexed numbers to label all the pins on the breadboard view.
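Renaming sixty-four rectangles by hand in Inkscape gets old fast. As a sketch of how the same SVG could be generated programmatically with Python’s standard library — the exact geometry here (pins across the bottom then back across the top, counter-clockwise DIP style, offsets of my choosing) is my assumption, not anything Fritzing mandates; only the zero-indexed `connectorNpin` IDs are what Fritzing actually looks for:

```python
import xml.etree.ElementTree as ET

def breadboard_pins(n_pins=64, pitch=0.1, pin=0.04, height=0.98):
    """Return SVG <rect> elements for the two pin rows of a DIP package."""
    rects = []
    per_side = n_pins // 2
    for i in range(n_pins):
        # DIP numbering runs counter-clockwise: pins 0-31 across the
        # bottom row, pins 32-63 back across the top row.
        col = i if i < per_side else n_pins - 1 - i
        y = height - pin if i < per_side else 0
        rects.append(ET.Element("rect", {
            "id": f"connector{i}pin",          # Fritzing's zero-indexed naming
            "x": f"{0.06 + col * pitch:.2f}in",
            "y": f"{y:.2f}in",
            "width": f"{pin}in",
            "height": f"{pin}in",
            "fill": "#8c8c8c",
        }))
    return rects

pins = breadboard_pins()
print(pins[0].get("id"), pins[63].get("id"))  # connector0pin connector63pin
```

Dropping these elements into the document before saving would leave only the purple-and-gold body to draw by hand.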
Once all the pins are labeled, select all, group everything and name this group ‘breadboard’ in the Object Properties window. Save this file to your desktop as a plain SVG (not an Inkscape SVG). That’s it for the Inkscape portion of building the breadboard part. Now we take it over to Fritzing.
In Fritzing, create a new part just like you did with the generic IC in the easy, dumb way above. In the Parts Editor, select File -> Load Image For View, and select the SVG you just saved from Inkscape. You’ll get something that looks like this:
Yes, the font changed, but whatever. This is the closest anyone has ever gotten to building a custom part in Fritzing. On the right side of the screen, there’s a list of connectors, with a button labeled ‘select graphic’ next to each pin. For each pin on our 64-pin monster, click the ‘select graphic’ button, and then click the gray rectangle of the corresponding pin. This shouldn’t be necessary if you labeled your pins correctly in Inkscape, but it’s another option if you didn’t.
Save the part, open up a new Fritzing window, and here’s what you get:
To reiterate, this is a custom part, with a custom breadboard view. There are no other tutorials that tell you how to do this. You’re welcome.
The breadboard view is only one-third of what’s required to make a part in Fritzing. Now we’re going to move on to the schematic view. This is a simplified view of the part that shows the functions of all the pins.
First, create a new Inkscape document with a width of 1.5 inches and a height of 3.3 inches. If you’re making a DIP schematic, the formula to calculate the height of a part is ([number of pins on one side] + 1) * 0.1. For a 64-pin chip with 32 pins on a side, it’s 33*0.1 = 3.3.
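That formula is simple enough to sanity-check in a couple of lines (plain arithmetic, nothing Fritzing-specific):

```python
# The schematic-height rule from above: ([pins on one side] + 1) * 0.1 inches.
# round() keeps floating-point noise out of the result.
def schematic_height(total_pins: int) -> float:
    pins_per_side = total_pins // 2
    return round((pins_per_side + 1) * 0.1, 2)

print(schematic_height(64))  # 3.3, matching the 68000 above
```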
The body of the schematic footprint is a rectangle: no fill, black outline, 1px stroke width. Each pin is a straight line, 0.25 inches long, arranged along the sides of the black rectangle on 0.1″ centers.
Right now we have a simplified version of what the schematic footprint should look like. Yes, we’re missing labels for all the pins, but something even more important is missing: the IC terminals, or where the lines on the schematic connect to. Fritzing thinks these should be rectangles 0.2 pixels square (yes, point two pixels), so we need to add these to the end of every pin on this footprint.
Create a 0.2 × 0.2 pixel rectangle at the tip of every leg of the schematic, and label them in the Object Properties dialog as ‘connector0terminal’ through ‘connector63terminal’. Once that’s done, label the pins in the Object Properties dialog as ‘connector0pin’ through ‘connector63pin’. Yes, that’s one hundred and twenty-eight things you need to rename. It’ll take a while. When that’s done, save it as a plain SVG, go to the Parts Editor in Fritzing, select File -> Load Image For View, and choose the file you just created in Inkscape. Here’s what you’ll get:
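With 128 objects to rename, this is another spot where generating the SVG beats clicking. A sketch under the same caveats as before — the coordinates and styling are illustrative choices of mine; the `connectorNpin`/`connectorNterminal` IDs and the 0.2 px terminal squares are the parts Fritzing actually cares about:

```python
import xml.etree.ElementTree as ET

def schematic_leg(i, x, y, left=True):
    """One schematic leg: a 0.25 in pin line plus the 0.2 px terminal
    square Fritzing expects at its free end."""
    x_tip = x - 0.25 if left else x + 0.25
    pin = ET.Element("line", {
        "id": f"connector{i}pin",
        "x1": f"{x}in", "y1": f"{y}in",
        "x2": f"{x_tip:.2f}in", "y2": f"{y}in",
        "stroke": "black", "stroke-width": "1",
    })
    terminal = ET.Element("rect", {
        "id": f"connector{i}terminal",
        "x": f"{x_tip:.2f}in", "y": f"{y}in",
        "width": "0.2", "height": "0.2",  # 0.2 px squares, per the text above
    })
    return pin, terminal

# 32 legs down the left side, 32 down the right, on 0.1 in centers
legs = [schematic_leg(i, 0.25 if i < 32 else 1.25, 0.1 + (i % 32) * 0.1, left=i < 32)
        for i in range(64)]
print(legs[63][1].get("id"))  # connector63terminal
```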
I’ve added a few things to this schematic view, the most obvious being the pin labels. Other than that, it’s pretty standard, and now we’re almost done creating a part from scratch in Fritzing.
You know the drill by now. Create a new Inkscape document. The dimensions of the canvas are (width of the package + 0.02 inches) by (height of the package + 0.02 inches). For the 68000, that’s 3.22 inches by 0.92 inches. Your pads are just circles, with no fill, and some sort of yellow stroke. Arranging these pads is left as an exercise to the reader.
Fritzing requires you to name these pads, so name them ‘connector0pin’ through ‘connector63pin’. Group all of these pads and call that group ‘copper0’, then group them again and call that group ‘copper1’. This is, ostensibly, for the top and bottom copper layers.
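The nesting matters here: the pads sit inside `copper0`, and `copper0` sits inside `copper1`, which is how the footprint ends up on both copper layers. A sketch of that structure in Python — the pad radius, stroke width, and positions are my guesses for illustration, not values from the tutorial:

```python
import xml.etree.ElementTree as ET

def pcb_footprint(n_pins=64, pitch=0.1, row_spacing=0.9):
    """Build copper1 > copper0 > pads, the layer nesting described above."""
    copper1 = ET.Element("g", {"id": "copper1"})
    copper0 = ET.SubElement(copper1, "g", {"id": "copper0"})
    per_side = n_pins // 2
    for i in range(n_pins):
        col = i if i < per_side else n_pins - 1 - i   # counter-clockwise DIP order
        cy = row_spacing if i < per_side else 0.0
        ET.SubElement(copper0, "circle", {
            "id": f"connector{i}pin",
            "cx": f"{0.11 + col * pitch:.2f}in",
            "cy": f"{cy + 0.01:.2f}in",
            "r": "0.029in",               # an assumed through-hole pad radius
            "fill": "none",
            "stroke": "goldenrod",
            "stroke-width": "0.02in",
        })
    return copper1

footprint = pcb_footprint()
inner = footprint.find("g")
print(inner.get("id"), len(list(inner)))  # copper0 64
```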
Save this as a regular SVG, open up Fritzing, go to the Part Editor, and replace the PCB footprint with the SVG you just saved.
With that, we’re done. That’s how you create a part mostly from scratch in Fritzing. Hit save, close Fritzing, and throw your computer in the garbage. It’s tainted now.
Admittedly, I didn’t make this easy on myself by creating a 64-pin DIP from scratch in Fritzing. Making a part in Fritzing is a tedious process and should not be done by anyone. It’s possible, though, and if you have enough time on your hands, you can create beautiful vector graphics that are also real, working parts in Fritzing.
Supporters of Fritzing say its greatest strength is that it’s an easy tool to use, and useful if you want to whip up a quick PCB for prototyping. They are correct, so long as all the parts you want to use are already in Fritzing’s core libraries. It is possible to create parts from scratch, but this is a task that could be done faster in literally any other PCB design program. What we’re looking at here is a walled garden problem, and for the second most popular Open Source PCB design software, this isn’t doing Fritzing any favors.
It should be noted, however, that many of the tasks required to make a Fritzing part can be automated. PCB and schematic footprints can be auto-generated. In theory, a simple command line tool could tie these parts directly to breadboard footprints. If anyone wants to contribute to Open Source in a meaningful way, there’s a project for you: make a tool that takes an SVG of a chip or component and turns it into a Fritzing part.
Closing out this tutorial, I’d like to thank [Arsenijs] who created the first tutorial on making a Fritzing part over on Hackaday.io. [Arsenijs] did this because I put up a bounty for the first guide to making a part in Fritzing from scratch. Not only do I contribute to Open Source (which means I’m better than you), I contribute to Open Source documentation. I am a unicorn that lays golden eggs.
That’s it for Fritzing. I’m not touching it again. For the next post in this Creating a PCB in Everything series, I’m going to take a look at the cloud-based PCB design tool, Upverter. Will it be better than Fritzing? Who knows. Maybe. Probably.
Wind back the clock to 1971. Jane, a freshly minted college graduate, joins the government as a clerk. Jane’s work consists largely of entering information into databases and creating reports, which requires her to spend the better part of her work day seated at a terminal near a mainframe computer that fills an entire room. Jane and her colleagues are expected to be at their desks from 9 a.m. to 5 p.m., five days a week. Jane is grateful to have a steady 9-to-5 job, and plans to spend her entire career with her agency.
Flash forward 40 years and meet Jane’s grandson, Ian. He carries a slim tablet wherever he goes, which has more computing power than the mainframe with which Jane worked. Ian is constantly tethered to the Internet and works 24/7, from wherever he is. Ian expects to switch from project to project and office to office as his career develops and his interests evolve. If he feels he has reached the limit of his ability to learn or grow in one role, he will look elsewhere for a new opportunity. What if the government could give Ian the opportunities and experiences he seeks?
The GovCloud concept proposed in this paper would restructure government workforces in a way that takes advantage of the talents and preferences of workers like Ian, who are entering the workforce today. The model is based on a large body of research, from interviews with public and private sector experts to best practices from innovative organizations both public and private.
“This is the first generation of people that work, play, think, and learn differently than their parents… They are the first generation to not be afraid of technology. It’s like the air to them.”
— Don Tapscott, author of Grown Up Digital
This report details trends in work and technology that offer significant opportunities for improving the efficiency and effectiveness of the government workforce. It lays out the GovCloud model, explaining how governments could be organized to take advantage of its flexibility. It examines how work would be performed in the new model and discusses potential changes to government HR programs to support GovCloud. Other sections provide resources for executives, including a tool to help determine cloud eligibility, steps they can take to pilot the cloud concept, and future scenarios illustrating the cloud in action.
The GovCloud model represents a dramatic departure from the status quo. It is bound to be greeted with some skepticism. Without such innovation, however, governments will be left to confront the challenges of tomorrow with the workforce structure of yesterday. The details of the GovCloud model are open for debate. The purpose of this paper is to jumpstart that debate.
Forty years ago, more than half of employed American adults worked in either blue-collar or clerical jobs. Today, less than 40 percent work in these same categories, and the share continues to shrink.1 Jobs requiring routine or manual tasks are disappearing, while those requiring complex communication skills and expert thinking are becoming the norm.2 Increasingly, employers seek workers capable of creative and knowledge-based work.
“We should ask ourselves whether we’re truly satisfied with the status quo. Are our workday lives so fulfilling, and our organizations so boundlessly capable, that it’s now pointless to long for something better?”
— Gary Hamel, author of The Future of Management
The next generation of creative knowledge workers has already entered the job market. These “Millennials” came of age in a rapidly and radically changing world. They are the first true digital “natives.” They have grown up with instant access to information through technology. As such, Millennials have considerably different expectations for the kind of work they do and the information they use. The pursuit of variety in work has led Millennials to cite simply “needing a change” as their top reason for switching jobs.3
Advances in technology have also changed the ways in which people perform work. The ability to crowdsource tasks is one example of this change. Since Wikipedia’s founding in 2001, volunteers have produced and contributed to over 19 million articles in 281 languages.4 Built around this concept, a burgeoning industry is developing around “microtasking”: dividing work into small tasks that can be farmed out to workers. Amazon’s Mechanical Turk, rolled out in 2005, allows users to post tasks to a platform where registered workers can accept and complete them for a small fee. When this paper was written, more than 195,000 tasks were available on Mechanical Turk.5
Such technologies may offer suitable possibilities for the public sector. Microtask, a Finnish cloud labor company, maintains Digitalkoot, a program that helps the Finnish National Library convert its image archives into digital text and correct existing errors. It does so with volunteered labor; participants simply play a game in which they are shown the image of a word and then must type it out to help a cartoon character cross a bridge. In doing so, they are turning scanned images into searchable text, greatly improving the search accuracy of old manuscripts.6 At present, more than 100,000 people have completed over 6 million microtasks associated with this project.7
As computing power grows and machine learning advances, professors Frank Levy and Richard Murnane contend that more and more tasks will move from human to computer processing.8 Skeptics need look no further than IBM’s Watson, a computer that can answer questions posed in natural language. In February 2011, Watson defeated two all-time champions of the quiz show Jeopardy! This was not solely a publicity stunt; IBM hopes to sell Watson to hospitals and call centers to help them answer questions from the public.9
Around the globe, more and more governments are looking to increase telework among employees. In 2010, the U.S. government passed legislation calling for more telework opportunities for government employees. Likewise, the Australian government, in order to attract and retain information and communications technology workers, instituted a teleworking policy in 2009 requiring agencies to implement flexible work plans.10 Other countries, including Norway and Germany, are also focusing on flexible work arrangements to improve public sector recruiting.11 In Canada, the government has an official telework policy that recognizes “changes are occurring in the public service workforce with a shift towards more knowledge workers,” and “encourages departments to implement telework arrangements.”12
Cloud computing: “Internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on-demand, like electricity.”
Crowdsourcing: “Neologistic compound of crowd and outsourcing for the act of taking tasks traditionally performed by an employee or contractor, and outsourcing them to a group of people or community, through an “open call” to a large group of people (a crowd) asking for contributions.”
GovCloud: “A new model for government based on team collaboration, whereby workforce resources can be surged to provide services to government agencies on-demand.”
Source: Frank Levy and Richard J. Murnane, The New Division of Labor: How Computers are Creating the Next Job Market, (Princeton: Princeton University Press, 2004), p. 50.
Figure 1: Trends in routine and non-routine tasks in the U.S. 1960-200213
These are all powerful steps in the right direction for employees whose natural work rhythms are not locked into “9 to 5.” Some companies have taken telework one step further. British Telecom is pushing the concept of “agile working” through its Workstyle Project, where employees decide what work arrangements best suit them—rather than a rigid definition by location and hours. BT Workstyle is one of the largest flexible working projects in Europe, with over 11,000 home-based workers. BT has found that its “home-enabled” employees are, on average, 20 percent more productive than their office-based colleagues.14
Similarly, U.S. electronics retailer Best Buy experimented with a “Results Only Work Environment” (ROWE). In a ROWE, what matters is not whether employees are in their office, but rather that they complete their work and achieve measurable outcomes. In a ROWE, salaried employees must put in as much time as is actually needed to do their work—no more, no less.
The decline in routine and manual tasks and the rise of new ways of working is not isolated to the private sector. In 1950, the U.S. federal workforce largely comprised clerks performing repetitive tasks. About 62 percent performed these tasks, while only 11 percent performed more “white-collar” work. By 2000, those relationships were reversed. Fifteen percent performed repetitive tasks, compared to 56 percent in the white-collar categories.16 Similarly, in 1944, the number of workers in the UK civil service considered “industrial” totaled 505,000. By 2003, this number fell to 18,200, with “non-industrial” workers reaching 538,000 in 2004.17 And in Canada, in 2006, knowledge-based workers represented 58 percent of federal workers in the Core Public Administration, up from 41 percent 11 years earlier.18
The swelling ranks of “non-industrial” government workers indicate a shift in public sector jobs toward creative, collaborative, and complex work. The workforce structure, however, designed for clerks of the last century, remains largely the same. With limited flexibility to distribute resources, governments often address change by creating new agencies and programs. This can be seen following major events of the past decade like the outbreaks of avian flu and SARS, 9/11, and the financial crisis of 2008.
Source: United States Office of Personnel Management, A Fresh Start for Federal Pay: A Case for Modernization (April 2002), p. 5. http://www.opm.gov/strategiccomp/whtpaper.pdf
Figure 2: The changing U.S. federal workforce 1950–200015
Given increasing budgetary pressures and burgeoning national debts, the conventional model of creating new agencies or permanent structures in response to new challenges is unsustainable. This is exacerbated by our inability to accurately predict future needs and trends. Consider a 1968 Business Week article proclaiming that “the Japanese auto industry isn’t likely to carve out a big share of the market for itself,” or the president of Digital Equipment Corporation, who in 1977 said, “[t]here is no reason anyone would want a computer in their home.”19
The world is full of experts who attempt to predict the future—and fail.20
Instead of endeavoring to predict the future, governments can choose to create a flexible workforce that can quickly adapt to future work requirements. To accomplish this, the government can learn from a game-changing concept in the technology world: cloud computing.
Major organizations and small startups alike increase their flexibility by sharing storage space, information, and resources in a “cloud,” allowing them to quickly scale resources up and down as needed. Why not apply the cloud model to people? The creation of a government-wide human cloud could provide significant benefits, including:
A cloud-based government workforce or “GovCloud” could include workers who perform a range of creative, problem-focused work. Rather than being slotted into any single government agency, cloud workers would be true government-wide employees.
This section outlines the organizational structure of the GovCloud model, which rests on three main pillars: a cloud of government workers, thin executive agencies, and shared services.
Most government workforce models tend to constrain workers by isolating them in separate agencies.
Consider the 2001 outbreak of foot-and-mouth disease in the United Kingdom and the subsequent slaughter of more than 6 million pigs, sheep, and cattle. The problem of a disrupted food supply is complicated. In most countries, multiple agencies focus on agriculture, food production, and public health. In the United Kingdom, even the army and tourism ministries were impacted by the outbreak as agencies became overwhelmed by the number of animals in need of disposal and by the cordoning off of tourist areas to prevent the spread of the disease. Yet the structure of government agencies often confines employees to work in information silos, creating inherent operational inefficiencies. In a cloud workforce model, experts in each area could be pulled together to support remedies and propose coordinated corrective measures.
“I want someone saying: ‘Did you know that the Ministry of Justice is doing that, or could you piggy-back on what the communities department is doing, or had you thought about doing it in this way?’ You’ve got to get away from thinking about centralized command and control.”
— Dame Helen Ghosh, Permanent Secretary, UK Home Office 21
The FedCloud model
The GovCloud model could become a new pillar of government, comprising permanent employees who undertake a wide variety of creative, problem-focused work. As needed, the GovCloud model could also take advantage of those outside government, including citizens looking for extra part-time work, full-time contractors, and individual consultants.
Cloud workers would vary in background and expertise but would exhibit traits of “free-agent” workers—self-sufficient, self-motivated employees who exhibit a strong loyalty to teams, colleagues, and clients. Daniel Pink, author of Drive: The Surprising Truth About What Motivates Us, argues that 33 million Americans—one-quarter of the workforce— already operate as free agents.22
According to “Lessons of the Great Recession,” a white paper from Swiss staffing company Adecco, contingent workers—those who choose non-traditional employment arrangements23—are expected to eventually make up about 25 percent of the global workforce.24 These more autonomous workers, according to Pink, are better suited to 21st-century work, and are more productive—even without traditional monetary incentives.25
The fluid nature of the cloud can provide significant benefits:
The nature of the cloud—teams forming and dissolving as their tasks require—encourages workers to focus on specific project outcomes rather than ongoing operations.
Thin agency structures could lead to:
Greater use of shared services could allow the federal government to:
The need to support some ongoing missions will remain, of course. These missions will be carried out by thin agencies.
Under the cloud concept, federal agencies would remain focused on specific missions and ongoing oversight. These agencies, however, would become “thinner” as many of their knowledge workers transfer into the cloud. Thin agencies could also create opportunities to streamline organizations with overlapping missions.
Employees working in thin agencies could fall into two main categories:
GovCloud could change the highest levels of public sector workers as we know them today. The Senior Executive Service in the United States, Permanent Secretaries and Directors General in the United Kingdom and Australia—all such senior officials could rotate between agencies, shared services, and the cloud, which would reflect the original intent behind many of these high-level offices: giving executives a breadth of experience in roles across government to help develop shared values and a broad perspective. An important benefit of rotation would be the ability to tap into cloud networks to assemble high-performing teams.
To further focus agencies on specific missions, many of their back-office support functions could be pulled into government-wide shared service arrangements.
The use of shared services in government has come and gone in waves—usually dictated by fiscal necessity. Most countries in Europe, as part of their e-government strategies, have placed increased focus of late on developing shared services, whether through an executive agency or a CIO, as well as working with EU coordination activities. And while the decentralized governments of some EU countries—such as Germany—make shared services more difficult, these countries are using states and agencies to pilot innovative approaches.27
Other efforts around the world include the U.S. E-Government Act of 2002, which examined how technology could be used to cut costs and improve services. More recently, the New Zealand government appointed an advisory group in May 2011 to explore public sector reform to improve services and provide better value. In its report, “Better Public Services,” the advisory group recommended the use of shared services to improve effectiveness in a variety of government settings, including policy advice and real estate.29 Following up on this, three New Zealand agencies—the Department of the Prime Minister, the State Services Commission, and the Treasury—announced in December 2011 that they would share such corporate functions as human resources and information technology.30 And though shared services in Western Australia were shut down, other projects in South Australia are moving ahead and already showing savings.31
In August 2011, the government of Canada announced the launch of Shared Services Canada, a program that seeks to streamline and identify savings in information technology. Among its first targets is something as mundane as email. But with more than 100 different email systems being used by government employees, the potential savings and boost to efficiency could be significant. Not only do these incompatible systems cost money by requiring individual departments to negotiate and maintain separate licenses and technical support, they also make it difficult for government employees to communicate with one another and with the public. And with no single standard, ensuring the security of information transmitted over email becomes more challenging. Shared Services Canada will move the government to one email system as well as consolidate data centers and networks—ultimately looking for anticipated savings of between CA$100 million and CA$200 million annually.28
While the idea of using shared services is not a novel one, it is central to the GovCloud model. The GovCloud model envisions building upon effective practices and those shared services already in operation to deliver services like human resources, information technology, finance, and acquisitions government-wide. Workers in these shared services would include subject matter experts in areas like human resources and information technology, as well as generalists, who support routine business functions.
The potential for shared services continues to grow. As seen with IBM’s Watson and Microtask’s Digitalkoot, new technologies provide an opportunity to accelerate the automated delivery of basic services. Some agencies already have begun capitalizing on these trends. For example, NASA has moved its shared service center website to a secure government cloud, facilitating greater employee self-service and helping to reduce demand on finite call center resources.32
This decision tool is designed to help leaders determine which employees are appropriate for each of the three structures in the GovCloud model—the cloud, thin agencies, and shared services.
To the cloud…
GovCloud Project Lifecycle
Managing employees in the cloud will require governments to reinvent human resource management. Individual and team performance evaluations, career development, pay structures, and benefits and pensions would need to change to support GovCloud. This section examines possibilities for HR reinvention, including performance management, career development, workplace flexibility, and benefits.
Employees working in the cloud would require an alternative to determine pay and career advancement. The government could take its cues from the gaming world and evaluate cloud workers with a point system.
“The manager as we know it will disappear— to be replaced by a new sort of business operative whose expertise is assembling the right people for particular projects.”
—Daniel Pink, author of Free-Agent Nation 33
An HR management system that incorporates the accumulation of experience points (XP) through effective work on cloud projects, training, education, and professional certifications could replace the tenure-centric models for cloud employees.
As employees accumulate XP, they could “level up” and take on additional responsibilities in future projects. Workers in the cloud could earn XP in four ways:
After some high-profile incidents—slow responses to outbreaks of foot-and-mouth disease, flooding that may have been preventable, and a farming subsidy system that seemed to result in more chaos than aid—the UK Department for Environment, Food and Rural Affairs (DEFRA) was looking to reinvent itself. In 2006, the department launched DEFRA Renew. One of its key goals was to bring the department’s policymaking closer to real delivery to create more responsive processes.
Organized mainly by policy, with fixed teams, DEFRA had been unable to redeploy resources as needed in response to a crisis. As part of DEFRA Renew, a new operating model was implemented that used flexible resourcing, assigning staff to specific projects for fixed periods. This allowed management to measure and build the required capabilities and competencies and to allocate resources efficiently to improve overall service quality. New roles were also created to support sustainable staff development and resource management in the new model.
To create buy-in for such a fundamental culture shift within the department, a facilitative approach to decision-making was employed. Change management programs and mentoring were extended to all levels of the department, including leadership. New mechanisms—such as approval panels for resources and the use of business cases—also worked to push changes among staff and promote collaborative behavior.
DEFRA Renew was widely recognized as a key enabler in the department meeting required efficiency improvement targets set by the UK government. DEFRA moved to a more project-based approach, with fewer staff in core teams. According to Dame Helen Ghosh, former Permanent Secretary of DEFRA, they could be more responsive now that “the management board won’t be made up of director generals with individual policy silos.”34
Just as XP could be gained through learning new skills, it could be lost in the following three ways:
Any serious discussion about creating a new class of government employees requires a fresh look at employee benefits and compensation. For example, XP could be used to help determine workers’ salaries, but additional research into alternative pension and benefit programs is needed. While any discussion on compensation could be contentious, a healthy debate among stakeholders from across the government should be welcomed.
As new roles emerge in the cloud, so too could new career paths. Career emphasis could move away from time served in a particular pay grade and toward milestones that are meaningful for employee development.
Each worker may have different career aspirations. For instance, not all workers aspire to management; some may seek to master a particular subject area instead. Career advancement in the cloud would not equate to moving up a ladder, but rather moving along a lattice.
Lattice GovCloud Model
Here’s how the lattice could work for Ian, whom we met in the introduction.
“Think of the lattice as a jungle gym. The best opportunities to broaden your experience may be lateral or even down. Look every which way and swing to opportunities.”
— Pattie Sellers, Fortune editor at large
Cathleen Benko and Molly Anderson, the authors of The Corporate Lattice, argue that the corporate ladder is giving way to a lattice that accommodates flatter, more networked organizations; improves the integration of career and life; focuses on competencies rather than tenure; and helps increase workforce loyalty.35 The lattice metaphor allows employees to choose many ways to “get ahead.”
It is unlikely that all workers will thrive in the new GovCloud environment right out of the gate. As such, it would be important to assess a worker’s readiness before placing her in GovCloud and to provide training on the core competencies critical to cloud success. There could also be opportunities to start workers, especially those at earlier stages of a career, within an agency or shared service to build up expertise in some area before “graduating to the cloud.” Once in the cloud, new workers could be paired with more experienced mentors to help them navigate the cloud experience.
There should be an emphasis on continuous learning in the cloud. It would be important for cloud workers to continue to refine their skills, develop additional expertise, and adapt to new ways of working. Not only could continuous learning affect workers’ career mobility by increasing the depth and breadth of their skills, but it could also impact their salary and level by increasing their XP.
Learning and development in the cloud could take on many themes of “next learning.” Next learning focuses on creating personalized learning experiences that leverage the latest technologies and collaborative communities to deliver education and learning programs that build knowledge bases and promote learning as a focus and passion, not just a checkbox in a career.36
To broaden cloud worker skills and the ability to handle multiple tasks and work on a variety of projects, cloud learning could include the following principles:37
In the cloud, careers and expertise will be built in new ways and work will be something we do, rather than a place we go to. As such, the cloud will give workers more control over their schedules and workloads. By creating a flexible workplace, governments could shed a significant amount of physical infrastructure and create shared workspaces. Many buildings could be converted into co-located spaces; teams could use collaboration spaces or videoconferencing centers.
Some workers might rarely set foot in a government building, instead conducting cloud tasks at home and interacting with project teams virtually. With advancing communications and mobile technology, distance no longer hinders collaboration. It no longer matters whether all workers are at an office between 9 a.m. and 5 p.m.; what matters is whether project teams produce results and whether everyone contributes.
A more flexible workplace could also take advantage of resources governments might not otherwise have access to. Some retiring workers may not want to quit working altogether, and a flexible model could be an enticing way to keep their expertise on retainer. Alternatively, the model could take advantage of would-be government employees unwilling to relocate or unable to work a regular schedule. By increasing flexibility, governments could increase their available resource pool, allowing agencies to access the skills and knowledge they need, when they need it. For an example of how a retiree could interact with the cloud, see Appendix C: National Security Case Study.
Don’t think governments will ever take to the cloud? At the U.S. Department of State, the idea could soon be a reality. The Office of eDiplomacy is preparing to pilot a cloud component to its e-internship model for American students as part of the Virtual Student Foreign Service (VSFS), beginning this year. The VSFS currently offers U.S. university students e-internships lasting several months. By using a new micro-volunteering platform, State Department offices and embassies around the world will be able to create non-classified tasks that take anywhere from a couple of minutes to a couple of days to complete. Each task will be tagged by region and/or issue and will automatically populate the profiles of students who have indicated those interests. Students can then select the tasks that interest them the most or that fit into their schedules.
To see that the most pressing work is performed first, offices and embassies will be able to prioritize their tasks, so critical items appear at the top of the queue. Imagine a small embassy preparing for a high-profile, multilateral meeting. The preparations for such an event could be daunting for a small staff. The power of the cloud could augment an individual embassy’s capacity to prepare for a major event and ensure that related items are performed ahead of those that are less critical.
While there are plenty of incentives for participating in the VSFS micro-volunteering platform—from an impressive line on a student’s resume to the chance to make a difference by working on topics of interest—thought is being put into how to creatively incentivize high performance. One idea is simply to invoke students’ competitive spirit. Competition could be encouraged by a monthly leaderboard, which brings bragging rights and, potentially, even a low-cost but high-impact reward. Transparency is also key to competition: with ratings available to State Department staff and other cloud interns, and with short thank-you notes from embassies made publicly available, interns would be keen to make a good impression.
The potential applications of this type of program are significant. Imagine if offices throughout the State Department could tap into the language and cultural expertise of the thousands of foreign national staff members around the globe. Providing a platform for those employees to contribute even a small amount of time to discrete tasks that require their expertise could unlock a world of knowledge.38
Creating the GovCloud model will require bold leadership and the ideas and initiatives of entrepreneurial executives. While a GovCloud model may be years in the making, agencies can begin adopting cloud concepts today.
The GovCloud concept is designed to be versatile and applicable to a wide range of entities. Depending on the organization, government executives wishing to employ GovCloud may choose to apply the concept first to a single unit before expanding to other branches or divisions, entire agencies, or the whole of government.
Often, GovCloud principles are most effectively implemented as part of a larger reform program within a particular agency—as with the UK Department for Environment, Food and Rural Affairs’ Renew program, described earlier in this report. On a smaller scale, the UK Cabinet Office used flexible resourcing (FR) in its Efficiency and Reform Group (ERG), with a staff of about 400, as part of its cost-reduction plans. Using a simple database that it had developed and a strong program of communications, FR is now used and embraced by all core ERG employees, with strong, clear ownership from the top—another key implementation factor. Says Ian Watmore, the UK Cabinet Office’s former permanent secretary, FR means “we are able to deploy people much more quickly to priority projects.”40
Figure 3 outlines how GovCloud can apply to a variety of organizations.
Most government workforces haven’t undergone a broad restructuring in decades. In that time, the world has been transformed by computers, the Internet, and mobile communications.
To respond to a variety of challenges, governments have created scores of new organizations. However, in today’s world of budget cuts and increased fiscal scrutiny, the constant creation of new, permanent structures is not sustainable.
The GovCloud model could offer a new way to use government resources. A cloud of government-wide workers could coalesce into project-based teams to solve problems and separate when their work is done. This could allow governments to concentrate resources when and where they are needed. By using this model in conjunction with thinner agencies and shared services, governments can reduce back-office redundancies and let agencies focus on their core missions.
This model capitalizes on the work preferences of Millennials—the future government workforce—who value career growth over job security or compensation.41 The GovCloud model allows employees to gain a variety of experiences in a shorter amount of time and to self-select their career direction.
To support GovCloud, governments could establish the processes by which cloud teams would form, work, and dissolve. New ways to evaluate performance and help workers gain skills and build careers should be considered. Today’s employee classification system stresses job descriptions and time in service; this could be transformed with an XP model that emphasizes the individual’s ownership of his or her career.
The GovCloud model will undoubtedly be controversial. Many stakeholders, from governing bodies to public employee unions, must weigh in to shape the future government workforce. The transition to a cloud model will not happen overnight or maybe even in the next five years, but the conversation starts today.
Charlie Tierney is a Manager in Deloitte Consulting LLP’s Federal Human Capital Practice and a former GovLab Fellow. He has served clients in the intelligence community. He graduated from the University of Kansas with a BA in Chinese History and minor in Mandarin, and is currently pursuing his Masters in Business Administration at the University of Maryland’s Smith School of Business.
Steve Cottle was a GovLab Fellow and a Senior Consultant in Deloitte Consulting LLP’s Federal Strategy & Operations practice. There, he served multiple clients within the Department of Homeland Security. Steve graduated from Boston College with a BA in International Studies and German and received a Fulbright Grant to study international security in Germany. Steve is currently pursuing his Masters in Public Policy at the Georgetown Public Policy Institute.
Katie Jorgensen was a GovLab Fellow and Consultant in Deloitte Consulting LLP’s Federal Strategy & Operations practice. There, she served multiple clients in the Federal Railroad Administration and Transportation Security Administration. Katie received her BA in American Studies from Georgetown University. Katie is currently pursuing her Masters in Business Administration at Duke University’s Fuqua School of Business.
Originally published by Deloitte University Press on dupress.com. Copyright 2015 Deloitte Development LLC.
If you are invested in broad market indexes, ETFs (exchange-traded funds), or even individual stocks, there is no way to avoid the ups and downs of the market, which are sometimes extreme. Most folks, especially conservative investors, detest the roller-coaster ride of the stock market. It is not even good for their wealth-building goals, as high volatility generally results in relatively poor performance (unless you are buying at regular intervals, like each pay period). Income investors, including retirees who live off their investments, have an additional problem: since they need to withdraw income on a regular basis, they may be forced to sell when prices are low, making those losses permanent. Even if they do not withdraw income, more than likely they are not adding any fresh money to take advantage of low prices. Retirees also face sequential (sequence-of-returns) risk if the market happens to go into a deep, multi-year correction in the early phase of retirement.
So, how can we ensure consistent income without drawing down our portfolio, while conserving capital? In fact, we framed the goals of an ideal portfolio around these problems. Here they are:
Capture at least 80 to 90% of the upside of the markets during the bull runs.
Avoid the worst of deep corrections and panics, and preserve capital to a large extent.
Match or exceed the market returns on a long-term basis.
Generate at least 5% income that can be withdrawn/used if needed.
With these goals, a few years ago we introduced a portfolio concept (or strategy) in our articles on SA that we like to call the NPP (Near-Perfect Portfolio) strategy. To some, the name may appear a bit over the top, but the underlying premise and goals fit it. Generally speaking, we're looking for a strategy that performs reasonably well during bull markets, preserves capital when the market throws a fit or performs poorly, and provides a decent income stream on a consistent basis. This is the basis of the Near-Perfect Portfolio strategy.
In our view, the following types of investors should find the NPP strategy highly useful:
An income investor who does not like the roller-coaster ride of the stock market and who would rather sleep well even during the depth of a correction.
A retiree or near-retiree who wants to avoid the sequential risk of the stock market but, at the same time, would like to stay fully invested.
Anyone who believes that low volatility leads to higher returns in the long term (and vice versa), especially folks who are nearing 50 or already 50+.
We follow the NPP strategy in our Marketplace service, "High-Income DIY," but from time to time we provide updates on the NPP portfolio's progress here on SA's public platform as well. As usual, we will provide an overview of the live performance of the NPP portfolio vis-à-vis the S&P 500 over the last 30 months (since January 2020), in addition to backtested performance going back to 2008. In the last section, we lay out how to structure a new portfolio based on the NPP strategy.
The past three to four years have been an incredible roller-coaster ride for the broader market. In 2019 and early 2020, we witnessed a continued, strong bull market. That 11-year bull market was then interrupted suddenly and violently by a once-in-a-century pandemic, which very few investors saw coming. Fortunately, the correction was very short-lived, and the next phase of the bull market took the S&P 500 as high as 4,790.
However, this year, markets have faced very strong headwinds. At one point, the S&P 500 had lost nearly 23% from its most recent peak (though it has recovered a little since then), and a vast number of individual stocks have lost even more. The biggest emerging threat to the economy (and hence the stock market) is the 40-year-high inflation that shows few signs of abating any time soon. That has pushed the Fed from dovish to extremely hawkish in a matter of a few months. The current geopolitical situation and a prolonged war in eastern Europe have only made things worse. Inflation is the biggest threat to retirees and folks on a fixed income, and to their stock portfolios. With rapidly rising interest rates and an increasingly hawkish Fed, we are a step closer to a possible recession. Can we avoid one? Maybe, maybe not. But the market's direction is likely to remain murky and volatile for the next few months, if not longer.
With all this turmoil going on in the market, it basically boils down to one simple question. Do we really want to constantly go through these ups and downs of the stock market and worry about the value of our portfolio on a daily or weekly basis? Or is there a better alternative, where a conservative investor or a retiree could have lower volatility, minimal drawdowns, and consistent income but still could enjoy the fruits of a rising market as and when that occurs? This is exactly the objective of this comparative analysis of the NPP strategy and the broader market indexes like the S&P 500.
Here's some background. No portfolio can be perfect, because no portfolio can meet all of its stated objectives in every situation all the time. However, if we could meet 80% of our objectives 80% of the time, we should do pretty well. Also, we are not aiming to outperform or beat the market, but to meet our predetermined goals and expectations. If the backtesting results are any indication, chances are that we might beat the market as well; that would be icing on the cake.
The NPP Strategy is a combination of three investment baskets with unique and diverse sub-strategies. The combined strategy aims to achieve the following goals and objectives:
Preserve capital by limiting the drawdowns to less than 20%.
Provide a consistent income of roughly 5% to those who need to withdraw.
Grow the capital for the long term at an annualized rate of 10% or better (including the income).
Strive to take the stress out of investing by providing low volatility, low drawdowns, consistent income, and SWAN (sleep well at night) like characteristics.
We must caution that these strategies need some work on an ongoing basis and may not suit highly passive investors. In addition, they require patience.
Before we go any further, it may be beneficial to discuss how our rotational and buy-and-hold portfolios would have performed since the year 2008. Moreover, this will demonstrate how rotational portfolios can act as a counterbalance to buy-and-hold portfolios during times of crisis. We run and manage many such rotational portfolios and three buy-and-hold portfolios inside our Marketplace service.
Note: All the tables and charts included in this article are sourced from Author's work unless specified otherwise underneath the image. The stock market data, wherever used, is sourced from public websites like Yahoo Finance, Google Finance, Morningstar, etc.
Let's talk about drawdowns a little bit. During the good times (bull runs), it is natural that most folks do not think much about drawdowns. However, for retirees and older investors, it is of paramount importance to know their risk tolerance and have a realistic idea of how much of a drawdown would be tolerable to them. In addition to the level of tolerance to drawdowns, there is always the inherent risk of negative sequential returns during the early years of retirement. So, if you think that in a worst-case scenario you could only tolerate a 20-25% drawdown, then you should not be invested in broad market indexes. Broad market indexes like the S&P 500 routinely have drawdowns (loss from peak to trough) of over 30% or even 50%; examples include the dot-com crash of 2000-2002, the financial crisis of 2008-2009, and the Covid crash of 2020. Sometimes a downturn ends quickly, but at other times it can be long, slow, and painful.
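The peak-to-trough arithmetic described above can be sketched in a few lines. This is an illustrative calculation with made-up prices, not the article's backtest:

```python
# Max drawdown: the largest peak-to-trough loss in a price series.
def max_drawdown(prices):
    peak = prices[0]
    worst = 0.0
    for p in prices:
        peak = max(peak, p)                # highest value seen so far
        worst = min(worst, p / peak - 1)   # loss relative to that peak
    return worst

# Toy series: rises to 120, falls to 66 (a 45% drop), then recovers.
series = [100, 110, 120, 90, 66, 80, 100]
print(f"{max_drawdown(series):.0%}")  # -45%
```

Note that the drawdown is measured from the running peak, not the starting price, which is why a later recovery does not shrink it.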
So, let's compare the drawdown performance of our NPP strategy and the S&P 500 during some of the worst periods of the last 14 years (based on backtesting). We cannot go back beyond 2008 due to a lack of reliable data, but this at least covers the financial crisis and a few other deep correction periods.
• Jan-2008 to Mar-2009 (Financial and Housing crash)
• Oct-2018 to Dec-2018 (Crash of 2018)
• Jan-2020 to Mar-2020 (Pandemic crash)
• Jan-2022 to July 22, 2022 (the current period)
As you can see above, the NPP portfolio's drawdown was less than one-third of the S&P 500's during the most extreme downturn and roughly half (or less) at most other times. The current period is still ongoing, so the complete picture is not yet available.
Below is the combined NPP portfolio performance (based on backtesting results from 2008 until July 22, 2022). We will then provide more details on the three components of the NPP strategy.
As of July 22, 2022 (backtested since 2008, with no income withdrawn), the chart compares the CAGR (compound annual growth rate) of the DGI bucket, the CEF high-income bucket, the Rotation bucket, the combined NPP strategy, and the S&P 500; the specific figures appear in the chart.
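For reference, CAGR is computed from just the start value, end value, and elapsed years. The dollar figures below are invented for illustration and are not the article's backtest numbers:

```python
# CAGR: the constant yearly growth rate that turns the starting
# balance into the ending balance over the given number of years.
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

# Hypothetical example: $100k growing to $400k over the roughly
# 14.5 years from January 2008 to July 2022.
print(f"{cagr(100_000, 400_000, 14.5):.1%}")  # roughly 10% per year
```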
The chart below is the same as above, except that a 6% (inflation-adjusted) income is withdrawn every year. In terms of growth of capital, the S&P 500 did a terrible job despite performing very well over the last 13 years; the reason was the huge drawdown right in the first year. This clearly demonstrates the danger of the sequence-of-returns risk mentioned earlier. The NPP strategy's balance, by contrast, grew very nicely thanks to its limited drawdowns.
(If 6% Inflation-adjusted Income was withdrawn)
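The sequence-of-returns effect is easy to demonstrate with a toy simulation. The return sequences and withdrawal amounts below are invented; the point is only that the same returns in a different order produce very different ending balances once fixed withdrawals are taken:

```python
# Apply a sequence of annual returns to a portfolio while taking a
# fixed dollar withdrawal (6% of the starting balance) each year.
def simulate(returns, start=1_000_000, withdrawal_rate=0.06):
    balance = start
    withdrawal = start * withdrawal_rate
    for r in returns:
        balance = balance * (1 + r) - withdrawal
    return balance

good_first = [0.20, 0.10, 0.05, -0.30]   # same returns, crash comes last
bad_first  = [-0.30, 0.05, 0.10, 0.20]   # same returns, crash comes first

print(round(simulate(good_first)))  # 775590
print(round(simulate(bad_first)))   # 675840
```

With no withdrawals, both orderings would end at exactly the same balance; the fixed withdrawal is what makes an early crash permanently damaging.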
Below is the chart of the "live performance" of the NPP Portfolio since January 2020. (Since this comes from our Marketplace service, it takes into account six rotational portfolios and three buy-and-hold portfolios.) In practice, an investor would need just two buy-and-hold buckets and one (or maybe two) rotational buckets. In the chart below, you will notice that at the market bottom in March 2020, the S&P 500 and Dow Jones were down nearly 30% and 35%, respectively, while NPP was down only about 15%.
For readers who are new to this strategy, we will present an actionable plan for starting a three-basket NPP portfolio. We encourage readers to analyze it carefully, do their due diligence, and judge for themselves whether the strategy suits their personal situation. We provide below a sample NPP portfolio, complete with its three components. At times there may be some repetition, but we feel new readers will benefit greatly.
The idea here is to provide the basic framework. You do not have to follow the strategy exactly as it's laid out here; rather, use these ideas in a manner that suits your needs and your own unique situation. For example, younger and more aggressive investors could include a fourth bucket of "Technology and Innovation" stocks, allocating 10% to 25% of the portfolio's capital, while the most conservative investors could instead use that fourth bucket as a cash-like investment.
It takes time to build confidence and conviction in any new strategy, so it's highly recommended to move to any new strategy gradually, adding in small lots over a period of time rather than all in one go.
Below, we outline a three-bucket portfolio for someone investing today. The fourth bucket is optional, depending on individual circumstances, and thus is not included here.
We believe that a diversified DGI (Dividend Growth Investing) portfolio should hold roughly 15-25 stocks. However, more passive investors, who do not have time or interest to manage individual stocks, could make this portfolio entirely of some select dividend ETFs (Exchange Traded Funds). For our sample portfolio presented below, we looked for companies that are large-cap, relatively safe, and have solid dividend records. Based on our previous work, we believe many of these stocks will likely provide a high level of resistance to downward pressure in an outright panic situation. In addition, we included two stocks that are providing an above-average yield and will lift the overall portfolio yield. We will present 15 such stocks with their current dividend payouts.
• 3%-4% dividend income from long-term investments
• Long-term total return in line with the broader market
• Drawdowns of about 65%-70% of the broader market
In this bucket, we will invest roughly 35%-40% of the total investable funds. It will be our core investments in solid, blue-chip dividend stocks. It's relatively easy to structure and form this bucket. However, we must put emphasis on diversifying among various sectors and industry segments of the economy. A selection of roughly 15-25 stocks could provide more than enough diversification.
For this part of the portfolio, our focus is to select stocks that tend to do reasonably well in both good times and during recessions/corrections. This is especially important if you are a retiree.
AbbVie Inc. (ABBV), Amgen (AMGN), Clorox (CLX), Digital Realty (DLR), Enbridge (ENB), Fastenal (FAST), Home Depot (HD), Johnson & Johnson (JNJ), Kimberly-Clark (KMB), Lockheed Martin (LMT), McDonald's (MCD), Altria (MO), NextEra Energy (NEE), Texas Instruments (TXN), and Verizon (VZ).
The average yield from this group of 15 stocks is very respectable at 3.62%, compared to 1.5% for the S&P 500. If you still have some years before retirement, reinvesting the dividends for a few years would easily take the yield on cost up to 4%.
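The yield-on-cost arithmetic can be sketched as follows, assuming (purely for simplicity) a flat share price and no dividend growth, so the only effect is reinvested payouts buying more shares:

```python
# Yield on cost after reinvesting dividends: income received per
# dollar originally invested. Flat price and constant dividend are
# simplifying assumptions for illustration only.
def yield_on_cost(start_yield, years):
    shares = 1.0                      # normalized share count
    for _ in range(years):
        shares *= 1 + start_yield     # each payout buys more shares
    return shares * start_yield       # income / original cost

# Starting from the sample portfolio's 3.62% average yield:
print(f"{yield_on_cost(0.0362, 3):.2%}")  # 4.03%
```

Even without any dividend growth, three years of reinvestment pushes the yield on cost past 4%; actual dividend growth would get there faster.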
So, what's a Rotation strategy, and why invest in it? First, this is our insurance bucket (or hedging bucket), which should preserve our capital in times of crisis or panic. In addition, it would reduce volatility, provide a decent return, and could provide a good income as well.
Along with the DGI portfolio, these strategies are an essential part of our overall portfolio. Investment in stocks is inherently risky, and the Rotational strategies provide the necessary hedge against the risk. They bring the overall volatility of the portfolio down and limit the drawdowns in a panic or a major correction scenario. The biggest advantage is that they let the investor sleep well at night. They bring a level of assurance that helps the investor to maintain calm and stay invested in good times and bad.
However, we must caution that these strategies require some regular work on a monthly basis. One can start with one rotation strategy, but eventually, one should invest in at least two rotational strategies. As one gains more experience and confidence, one could diversify in multiple strategies. We provide eight such strategies in our Marketplace service to suit a wider audience.
Note: A word of caution for new investors - just because we're allocating 40% of the portfolio to this strategy, we are not recommending that you change to this strategy overnight with large sums of money. Rather, it should be done gradually over time and in multiple lots. There are two benefits: first, you need time to gain confidence and have a conviction on the new strategy. Second, gradual deployment will avoid any whipsaws or reversals in the market.
In the Rotational bucket, we normally rotate between a fixed set of securities on a periodic basis (usually a month), based on the relative performance of each security during the previous period of defined length.
This portfolio is designed in such a way that it aims to preserve capital with minimal drawdowns during corrections and panic situations while providing excellent returns during bull periods. Due to much lower volatility, this portfolio is likely to outperform the S&P 500 over long periods of time. However, please note that it may underperform to some extent during the bull runs. It can also underperform in some years due to frequent whipsaws.
The strategy is based on eight diverse securities but will hold only two of them at any given time, based on relative positive momentum over the previous three months. Basically, we will select the two top-performing funds. The rotation will be on a monthly basis. The eight securities are:
Note: TBF did not have a history prior to 2010, so it was excluded for the years 2008 and 2009.
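The selection rule described above (rank the candidate funds by trailing three-month return and hold the top two, rebalancing monthly) can be sketched as below. The ticker names and return figures are placeholders, not the actual fund list:

```python
# Pick the two funds with the highest trailing 3-month returns.
def pick_top_two(trailing_3mo_returns):
    ranked = sorted(trailing_3mo_returns.items(),
                    key=lambda kv: kv[1], reverse=True)
    return [ticker for ticker, _ in ranked[:2]]

# Hypothetical trailing 3-month returns for 8 candidate funds:
candidates = {
    "FUND_A": 0.041, "FUND_B": -0.012, "FUND_C": 0.067, "FUND_D": 0.003,
    "FUND_E": -0.030, "FUND_F": 0.022, "FUND_G": 0.055, "FUND_H": 0.010,
}
print(pick_top_two(candidates))  # ['FUND_C', 'FUND_G']
```

At each month's end, the same ranking is recomputed and the holdings are swapped if the top two have changed; that monthly re-ranking is what produces the whipsaws mentioned earlier.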
It is a huge challenge for retirees to generate decent income without risking their capital. Sure, dividend stocks can generate roughly 3% income (at times 4%) relatively safely, but that may not be enough for everyone to meet their living expenses. With large enough capital (say, $2 million or $3 million), even a 3% dividend yield provides a decent income. But we are talking about investors who have $1 million or less. How can they generate a large enough income without risking their capital? This is where the NPP strategy invests one of its buckets in high-income securities like closed-end funds. We recognize that this is a relatively high-risk bucket; that's why we recommend no more than a 15%-25% allocation to it. However, this bucket provides most of the income with limited capital risk.
For high income, one has to essentially look at investment vehicles like REITs (Real Estate Investment Trusts), mREITs (mortgage REITs), BDCs (Business Development Companies), MLPs (Master Limited Partnerships), and CEFs (Closed-End Funds).
For this income bucket, we need to be highly selective and choose only the best of the best funds in each of the respective asset classes. Also, one should consider this part of the portfolio as a sort of "annuity" subset of the overall portfolio. In our opinion, however, these investments are a lot better than annuities in many respects. This portfolio provides a kind of assured high level of income and is still likely to grow faster than inflation over a long period of time. More importantly, annuities usually leave nothing for the investor's heirs, whereas this portfolio can be passed on to heirs in full.
We present here a set of 11 high-income investment funds. However, one of them is not a fund but an individual company, an MLP.
Below are some of the best funds within each asset class. The average current yield of the portfolio presented below is roughly 9%.
The funds/securities that we would consider in a long-term portfolio would be: (CHI), (UTF), (UTG), (PDI), (BBN), (FFC), (BST), (HQL), (MMP), (USA), and (RQI).
Note: MMP (Magellan Midstream Partners) is an MLP (Master Limited Partnership) in the midstream energy sector. As a partnership, it issues a Schedule K-1 for tax purposes instead of a Form 1099-DIV.
The stock market is at a critical juncture and is giving highly mixed signals. On the one hand, we have 40-year-high inflation, unresolved supply-chain issues, and record-high energy prices. An array of large companies has issued warnings about future earnings. The Fed is clearly behind the curve, and it appears to be trying to slow an economy that may already be slowing, which is a recipe for a recession. On the other hand, the job market is still strong, unemployment is low, and workers are hard to find. In a nutshell, there is a lot of uncertainty in the market. It is very hard to say whether the market will recover from its current depressed levels or whether this is just the beginning of a deeper downturn.
Nonetheless, as long-term investors, we take a different approach to facing uncertainty in the market. We invest in a set of three strategies (alternatively called the three-bucket strategy) that provide an extra layer of safety and diversification. Above all, this approach (which we call the NPP strategy) should generate a very decent income of 5%, protect against bigger drawdowns, and provide at least 10% overall growth (if not more) over the long term.
These strategies require a long-term investment horizon, a lot of discipline, and some time and effort on a monthly basis, especially in managing the Rotational part of the portfolio. So, in that sense, this strategy may not suit everyone. Also, the Rotational strategies work best inside a tax-deferred account. However, if you are a long-term investor and determined to save and build wealth in a more systematic and stress-free way, then this strategy may be right for you.
Years ago, after I received some negative feedback at work, my husband Laurence told me something that stuck with me: when we receive criticism, we go through three stages. The first, he said, with apologies for the language, is, “Fuck you.” The second is “I suck.” And the third is “Let’s make it better.”
I recognised immediately that this is true, and that I was stuck at stage two. It’s my go-to in times of trouble, an almost comfortable place where I am protected from further disapproval because no matter how bad someone is about to tell me I am, I already know it. Depending on your personality, you may be more likely to stay at stage one, confident in your excellence and cursing the idiocy of your critics. The problem, Laurence continued, is being unable to move on to stage three, the only productive stage.
Recently, I asked my husband if he could remember who had come up with the three-stage feedback model. He said it was Bradley Whitford, the Emmy Award-winning actor who played the charismatic Josh Lyman in The West Wing and, among other roles, the scary dad in the 2017 horror movie Get Out. “What? I would definitely have remembered that. There is no way that would have slipped my mind,” I insisted, especially because I had a mini-crush on the Lyman character for four of The West Wing’s seven series.
In 20 seconds flat, I had my laptop open and was putting one of my few superpowers, googling, to use. There it was. Whitford has aired this theory in public at least twice. Once during a 2012 talk at his alma mater, Wesleyan University, and again when he was interviewed on Marc Maron’s podcast in 2018.
To Maron, Whitford put it like this: “If I’m honest, anytime any director has ever said anything to me, I go through three silent beats: Fuck you. I suck. OK, what?” He added: “I really believe that that is a universal response and some people get stuck on ‘I suck’. You know people who live there. Some people live on ‘Fuck you’. Most people pretty quick get to the [third stage].” I realised that while Laurence said the third stage was “Let’s make it better”, Whitford’s original was the more ambiguous “Okay, what?”
Feedback is part of our everyday existence. It is widely viewed as crucial to improving our performance at work, in education and the quality of our relationships. Most white-collar professionals partake in some form of annual appraisal, performance development review or 360-degree feedback, in which peers, subordinates and managers submit praise and criticism. Performance management is a big business; the global market for feedback software alone was worth $1.37bn in 2020.
I decided to try to contact Whitford to find out more. But first, I wanted to know if there was any empirical evidence to back up his idea, and to learn how to leapfrog stages one and two and get to stage three as quickly as possible.
In 2019, I came across a book on a colleague’s desk titled Radical Candor, written by a former Google and Apple executive named Kim Scott. At the time, I was covering my boss’s maternity leave and, as I encountered the niggling issues that beset every team, I became interested, for the first time in my life, in management theory. The book’s title resonated with me. Who wouldn’t want to hear a truly honest assessment of their performance if it would help them improve?
When we feel optimistic about feedback, we imagine the kind of insights a good therapist might offer, gentle but piercing appraisals of our strengths and weaknesses, precious gems of knowledge sharp enough to cut through our self-delusions and insecurities. On a deeper level, many of us crave the thrill of being known, of being truly understood.
Of course, this is not what feedback is actually like.
We overestimate the capacity of our colleagues to calibrate their comments to our individual emotional states. We underestimate how bruising it is to hear that we are not meeting expectations, even when the issues are minor. And we can be surprised by critiques that do not line up with our sense of who we are. If you believe you’re a great listener and your 360-degree feedback comes back with complaints that you monopolise meetings, that may not feel like being known so much as feeling alien to yourself.
And yet we all have blind spots. As the psychologists David Dunning and Justin Kruger showed in a 1999 study, when we are unskilled in a particular field, we are more likely to overrate our ability in that area. Our incompetence makes it all the harder for us to understand how bad we are, a phenomenon now widely known as the Dunning-Kruger effect. This is one reason why feedback can be so necessary.
One of Scott’s fundamental beliefs is that there is nothing kind in keeping quiet about a colleague’s weaknesses. She calls this “ruinous empathy”. Scott is a two-word-catchphrase-generating machine. While aiming to achieve “radical candour”, you need to avoid “manipulative insincerity” and “obnoxious aggression”. The key in giving feedback, she writes in her book, is to “care personally” while “challenging directly”.
One of her favourite examples of radical candour in her own life is from 2004, soon after she joined Google to run sales for its AdSense team. She had just given a presentation to chief executive Eric Schmidt and Google’s founders, and was feeling pretty good, when Sheryl Sandberg, then a vice-president at the company and her boss, took her to one side. After congratulating her, Sandberg said: “You said ‘um’ a lot. Were you aware of it?” Scott brushed the comment off. Sandberg said she could recommend a speech coach and Google would pay. Scott again tried to move on, feeling it was a minor issue.
Sandberg grasped the nettle: “You are one of the smartest people I know, but saying ‘um’ so much makes you sound stupid.” In the book, Scott describes this moment as revelatory. She went to a speech coach and began thinking about how to teach others to adopt a more candid style of management.
When I email Scott to ask if she’ll talk about feedback, she replies promptly. She lives in a quiet, hilly neighbourhood in the San Francisco Bay Area, a 15-minute drive from the Google and Apple campuses, and suggests a video call at 7.30am her time. She logs on from her kitchen, early morning light pouring in through large windows behind her and bouncing off stainless steel surfaces.
A petite 54-year-old with rimless glasses, shoulder-length blonde hair and irrepressible energy, her preferred uniform is a T-shirt, jeans and an orange zip-up cardigan. I notice she wears the same cardigan in multiple TED-style talks. She later tells me she has 12 of them, in different weights, for summer, autumn and winter. She’s had so much flak about her clothes throughout her career that she decided to wear the same thing every day.
“I’m going to apologise because there’s going to be some background noise, I’m making eggs for my son,” she says cheerfully. Of course, it’s so early, I say, should we reschedule? “No, no, no . . . I’ve been up for a while, I have to just pay attention to the water boiling, that’s all.” She is cordial but brisk. I realise I am speaking to a highly productive person who is a scheduling master. I feel the urge not to waste her time.
Radical Candor was published in 2017 and became a New York Times bestseller. I begin by explaining the Whitford hypothesis. Does it ring true to her, a workplace guru who has made the art of giving feedback her speciality? “Yes, absolutely,” she says. But she would add an earlier stage: soliciting feedback. A phrase like “Do you have any feedback for me?” is bad, she says, because most people will simply respond “No.” It’s easier to pretend everything’s fine than to enter the awkward zone of giving criticism. “Nobody wants to give you feedback. Except your children.”
A good question, she says, is one that cannot be answered with a yes or no. Her preference is, “What can I do, or stop doing, to make it easier to work with me?” Even this question has been subject to, well, feedback. “Christa Quarles, when she was CEO of OpenTable, said, ‘I hate that question!’” Scott recalls. Quarles, who became friends with Scott after attending one of her talks, prefers asking, “Tell me what I’m doing wrong,” which Scott says is fine too.
Because she now coaches top executives at companies that have included Ford and IBM, Scott comes from a different angle than most. (Her book is subtitled: Be a Kick-Ass Boss Without Losing Your Humanity.) Managers who need feedback must somehow persuade employees to be honest with them despite their authority and the nervousness it can create. For the rest of us, feedback usually comes whether we ask for it or not.
I tell her that since childhood I have struggled not to take it personally and can tear up in the face of criticism, a trait I find infuriating and embarrassing. “I am a weeper myself,” she says, to my surprise, and suddenly switches to a more confiding tone. “My grandmother told me this when I was a child. I forget what I was in trouble for, but I was getting some critical feedback, and she sat me down and said, ‘Look Kim, if you can learn to listen when people are criticising you, and decide what bits are going to help you be better, you’ll be a stronger person.’”
It strikes me as very Kim Scott to describe a childhood scolding as “getting some critical feedback”. But it also pleases me to think there is a direct line from her grandmother’s advice to her successful career. And her grandmother was right. Research shows that a decisive factor in the effectiveness of feedback is whether we see it as an opportunity to grow or as a fixed verdict on our ability.
This holds true even when we are merely anticipating feedback. In a 1995 study by academics from the University of California, Riverside, children were split into two groups to solve maths problems. One was informed the aim was to “help you learn new things”. The other was told: “How you do . . . helps us know how smart you are in math and what kind of grade you might get.” The first group solved more problems.
In 2018, Scott received disruptive feedback when the satirical television show Silicon Valley featured a character who espouses “Rad Can”, a clear reference to her philosophy. The problem was that the character in question was a bully. Scott was on a plane when the episode aired. “I landed in London, and my phone just blew up,” she says. “I was devastated.”
The experience prompted her to write a second edition of the book. In its preface, she notes that some people were using her theories “as a licence to behave like jerks” and suggests readers substitute the word “compassionate” for “radical”. Scott got to stage three in the Whitford model pretty quickly, I suggest. “It really was useful,” she says of the TV episode. “It was painful and it was annoying, but there was something to learn.”
I wonder if there are some personality types that are better at responding in this way, but Scott argues we can all learn to be more resilient. She recommends listening with the intent to understand, not to respond. “Not responding straight away helps me avoid the ‘FU’ part,” she says. She also leans on a technique from psychology in which you observe your emotions with curiosity. “Part of what helps is to identify the feeling in your body. If you feel shame, for me, it’s a tingling feeling in the back of my knees, kind of the same feeling I get if I walk to the edge of a precipice . . . When I recognise I’m having that feeling, then I can take a step back and take a few breaths.”
Shame is the feeling I most associate with negative feedback. When I was 10, my class was told to make small 3D buildings out of paper. I cut carefully around the outlines of a cuboid and a prism, ran a glue stick over tabs at the edges and pressed them together in sequence. Sellotape was also employed. The teacher asked us to bring the models to him. I walked to his desk and handed mine over. He gazed at it in silence. After a long pause, he said: “You’re not very good with your hands, are you?”
For most of human history, this kind of feedback was the norm: direct and, at times, brutal. As recently as a few decades ago, it was also how performance at work was managed. In the early 1970s, the oral historian Studs Terkel interviewed more than 100 Americans about their jobs for his book Working. A steel mill worker named Mike Lefevre described being “chewed out” by his foreman, who told him, “Mike you’re a good worker, but you have a bad attitude.”
A 47-year-old Chicago bus driver recalled the humiliation of being told off by supervisors in public: “Some of them have the habit of wanting to bawl you out there on the street. That’s one of the most upsetting parts of it.” Nancy Rogers, a bank teller, said she was yelled at by her boss and had given some thought to why this might be: “He’s about 50, in a position that he doesn’t really enjoy. He’s been there for a long time and hasn’t really advanced that much.”
Yelling, screaming, bawling out. This is the kind of feedback that has become unacceptable in most workplaces. And not just because it’s hurtful and rude, or because we’ve all become “snowflakes”. It’s unproductive. A large volume of research shows criticism conveyed this way demotivates. Fearful, aggrieved people are less able to focus on the tasks at hand and are more likely to doubt themselves, resent their boss and possibly attempt armchair psychoanalysis, à la Rogers.
The type of criticism Lefevre received can be particularly destructive. Being told you have a bad attitude is what researchers call “ego-involving feedback”, which prompts the listener to believe they can’t change, that the failure is intrinsic to who they are. The teacher who said I wasn’t good with my hands was similarly generalising from a specific task, says Naomi Winstone, a professor of educational psychology at the Surrey Institute of Education. “It’s really terrible as a piece of feedback because it gives the impression that it’s fixed: you will always not be good.”
While research into the giving of feedback has been around since the early 20th century, the question of how we receive it has been less studied. Winstone, a warm, empathetic 39-year-old with a background in cognitive psychology, noticed the relative lack of research in 2013, when, as a director of undergraduate studies, she was tasked with improving students’ experience of assessment and feedback. She felt she could use her training to understand the barriers that keep students from acting on constructive criticism. “We assume that using feedback is just this amazing, in-built skill that we all know how to do effectively. We really don’t,” she says.
Winstone believes the ability to process feedback needs to be developed when we are young, like critical thinking. One of the projects she’s working on is titled “Everybody Hurts”, inspired by an idea first suggested by two medical education academics in Australia, Margaret Bearman and Elizabeth Molloy. They argued that to help students learn to cope with feedback, teachers should open up about their own failures. Bearman and Molloy named this “intellectual streaking”, but in a confirmation of my theory that anyone working in feedback becomes very responsive to feedback, they renamed it “intellectual candour” after an editor felt the reference to nudity was inappropriate.
Another Australian academic, Phillip Dawson, took intellectual streaking to heart. In 2018 he wrote a blog post, with endearing honesty, that bullet-pointed his typical reaction to negative comments:
Have an immediate affective response. This is usually some sort of hurt, though I’ve also felt anger, elation, stress, pride, shame and confusion.
Hide the comments so they can’t hurt me.
Make a to-do note to give the comments a proper look later on.
[Time passes, often to the point where I now have to look at the comments again]
Experience the same hurt from step 1 all over again.
Use the comments to improve my work.
A soft-spoken 39-year-old professor with curly brown hair, Dawson tells me over video call from Melbourne that he feels shame if he knows he has underperformed at work relative to his ability. But in his free time, he does stand-up comedy and, in that context, his impulse is to go to Whitford’s “stage one”. (He’s too polite to say the F word.) “And it kills me. Because I know that in my professional life, I’m better at it. So I don’t think we have a universal capability with feedback. It’s very contextual.”
Dawson recommends pausing when you receive criticism. Once you feel calm, try rewriting the feedback into a list of actions. “By rewriting, I’m making them tasks I assign myself,” he says. This “defangs” the feedback and allows you to take ownership of the next steps. He also recommends Thanks for the Feedback, a 2015 book by Douglas Stone and Sheila Heen, two lecturers at Harvard Law School who specialise in conflict resolution. They argue that feedback comes in three types: appreciation, coaching and evaluation. Problems arise when we expect one but get another. Often we simply crave a “Well done” or “Thank you”, and it’s jarring when we receive a tough evaluation instead. “I’ve found that to be really useful,” Dawson says, laughing. “It’s OK to want praise!”
I’m starting to feel I’ve got on top of the feedback question when I interview Avraham Kluger, co-author of one of the seminal pieces of research in the history of feedback studies. “I wonder if we could start by talking about your 1996 paper?” I ask. There is a long pause, so long that I wonder if my internet has frozen. I am at home in London. Kluger, a 63-year-old professor of organisational behaviour at Hebrew University Business School, is in Jerusalem.
It turns out the internet’s fine. He was just thinking. Kluger finally responds: “Yeah, I can tell you that. But I want to ask you another question, about the hidden assumptions, or the principal suppositions, behind your question.” There is another pause. “Why do we care about feedback to begin with? Why do we want to give feedback at all?”
I repeat his last question out loud, hesitantly. Is he really challenging the whole premise of feedback? Essentially, yes. We give it, he argues, because we hope to change the behaviour of another person. But often the person already knows there is a problem. “They don’t change because they don’t have the inner resources,” he says. His tone of voice is suddenly scathing, not scathing towards the people who can’t change, but towards those who assume they can do it for them.
Kluger’s journey to becoming a feedback-sceptic took decades. He was born in Tel Aviv in 1958, the son of Holocaust survivors. After studying psychology at university, he took a job in 1984 as a behavioural consultant to a police force in Israel. Hired to apply psychological principles to the management of police officers, he began by interviewing the regional chief of police’s direct reports. The subordinates complained that they received zero feedback from their boss.
Kluger took notes and presented his findings a few weeks later in a senior leadership meeting. Not long after he began speaking, the chief of police interrupted. “It’s over!” he apparently yelled, slamming his fist on the table. “I have been in the police force for 40 years. I came from this rank” – Kluger, re-enacting the scene for me, points to an invisible badge on his upper arm – “to this rank” – pointing to his shoulder – “and I am telling you, a good policeman does not need feedback. If he does need feedback he’s not a good policeman.” The chief turned to his secretary. “What’s next on the agenda?”
In trying to give feedback, Kluger had received some seriously negative feedback. Later, he would decide he’d made two mistakes. Although he had interviewed all of the subordinates, he had not interviewed the chief of police. And he had made his report in public. Criticising someone in front of others inflicts a particular kind of humiliation.
For all its painfulness, the episode was ultimately useful. Kluger became curious about what the academic literature did not understand about feedback and its effects on motivation. The following year he began a PhD to investigate this at the Stevens Institute of Technology in New Jersey. He devised an experiment in which he gave some engineers a set of test questions. One group was told after each question whether they’d got it right or wrong. The other group was given no feedback at all. Once the engineers had finished the questions, Kluger announced that the experiment was over but if anyone wanted to continue working, they could. To his astonishment, the people who had received no feedback at all were the most motivated to continue.
In 1989, Kluger got an assistant professorship at Rutgers University’s School of Management. Among the first people he met was Angelo DeNisi, a gregarious New Yorker from the Bronx. When Kluger told him he was studying the destructive effects of feedback on performance, DeNisi was intrigued. “My career is based on performance appraisal and finding ways to make it more accurate. You’re telling me the assumptions are incorrect?” he asked. “Yes, I’m afraid I am,” replied Kluger.
It was the start of a long friendship. “He’s Angelo, but he was an angel to me, in a way, to my career”, Kluger says. DeNisi was more experienced and had connections. The two reviewed hundreds of feedback experiments going back to 1905. What they found was explosive. In 38 per cent of cases, feedback not only did not improve performance, it actively made it worse. Even positive feedback could backfire. “This was heresy,” DeNisi recalls.
The way he tells it, his main function in getting the research published was to render Kluger’s sometimes impenetrable thinking lucid. “My role was to translate Avi’s ideas to the rest of the world. Avi has a way of thinking, that . . . ” DeNisi says, trailing off. “He’s brilliant, he truly is. But oftentimes his thinking isn’t linear. It goes round and round in circles. I inserted the linear thinking. But the ideas, the heart of the paper, is Avi.”
In 1996, they published their meta-analysis. It won awards and became one of the most-cited in the field. The two men would work together again, but their paths diverged. Kluger moved back to Israel and eventually became disenchanted with the entire subject. He no longer describes himself as a feedback researcher. He came to believe that as a performance management tool, it is so flawed, so risky and so unpredictable, that it is only worth using in limited circumstances, such as when safety rules must be enforced. If a construction worker keeps walking around a site without a helmet, negative feedback is vital, Kluger acknowledges. The most effective way to give it is with great clarity about potential consequences. The worker should be told that the next time they go without a helmet, he or she will be fired.
But in many other types of work, the formula for good feedback includes too many variables: the personality of the recipient, their motivations, whether they believe they are capable of implementing change, the abilities of the manager. Kluger now calls himself a researcher of listening. Instead of managers giving top-down feedback, he argues they should spend more time listening to their direct reports. In the process of talking in depth about their work, the subordinate will often recognise issues and decide to correct them on their own.
Based on this theory, Kluger developed something he calls the “feed-forward interview” as an alternative, or prologue, to a performance review. He offers to give me a demo. A week after our first conversation, we meet again over video call. I feel slightly nauseous, wondering what I’ve signed up for.
It is a curiously intimate process. He asks me to recall, in great detail, a time that I felt full of life at work. Full of energy. Maybe even happy. I describe a reporting trip to meet a source and how it felt when I realised I was being told something important, that the person I was speaking to had a story to tell. “What was it like?” he asks. “Like a lightbulb going on,” I reply. Kluger is working from a script, which he adapts to each person he interviews, and some of his techniques are borrowed from therapy. “I want to make sure I heard you,” he says, then repeats back to me what I’ve said. “Let’s explore what made this possible — what was materially important?” Sometimes he gives me better words than the ones I used initially. “You needed autonomy to make this happen, correct?” he asks. “Yes, exactly,” I say.
At the end of the session, he sums up. “I want to suggest that the conditions that we just enumerated are part of the inner code of Esther flourishing at work.” It feels like he’s awarding me a prize. He asks me to visualise this inner code as a lighthouse beaming from the shore, a safe harbour. He holds up a hand and begins opening and closing his fist, to mimic the lighthouse flashing. “Imagine you’re the captain of the ship of your life.” Kluger brings up his other hand to represent a boat. “To what degree are you navigating towards the light of those conditions? Or are you sailing away?”
Being truly listened to is exhilarating. As Kluger intended, I end up seeing work from a new perspective and giving myself some critical feedback about my priorities. But I’m not sure all managers would want their employees to go on a similar journey, one which is potentially unsettling and could lead them to rethink their choices. And it’s not exactly feedback. Of course that’s the point.
Months after I first started thinking about this subject, I have lunch with a friend who tells me a colleague frequently criticises her. It’s demoralising, especially as the person never praises even excellent work. “How should I respond?” my friend asks. I sit back and think. Despite all the time I’ve spent researching feedback, I’m unsure what to advise.
Kim Scott notes there will be times when feedback is wrong. Look for the five or 10 per cent that you can agree with, and fix that problem “theatrically”, she says. Later, once you’re out of the “Fuck you” and “I suck” stages, you should have a respectful conversation, explaining how you disagree. A respectful disagreement can strengthen a bond, she believes. Winstone, the educational psychology professor, suggests going back to the feedback-giver and saying, “This is why I don’t think this is the case. Can we talk about it?”
Sometimes feedback is really bias or bullying. If what your boss is delivering is obnoxious aggression, “Locate the exit nearest you,” Scott advises. “Having a boss that is bullying is damaging to your health. It’s a big deal.”
Much of how we respond to feedback is driven by the nature of our relationship with the person giving it. This is why Kluger believes it’s useless to focus on the recipient of feedback alone. The outcome will always depend on the “dyad” — the sociological term for two people in a particular relationship — and what transpires between them.
Kluger still sometimes sends work-in-progress to his friend and former research partner DeNisi. DeNisi recently told him that a paper was hard to follow and needed more work. Kluger told his wife, who said: “See, that’s why Angelo’s a friend. Because he tells you the truth. You should listen to him.”
“You gave him good feedback!” I tell DeNisi. “Yes, and he listened,” he says, beaming. It reminds me of a piece of research Kluger told me about, which theorises we’re more likely to accept negative feedback if we feel loved by the provider. “I’m not talking about romantic love,” Kluger said. “But if you really feel loved and cared for by the provider, then you’re most likely to accept it and to process it.”
I try every way I can to contact Bradley Whitford. I email his agency and leave a voicemail. One agent emails to tell me I have the wrong person and gives me his publicist’s contact details instead. She doesn’t reply. I write one of those embarrassing public tweets, essentially begging him to talk to me or answer some questions over email. Finally, I receive a response from an assistant: “Thanks so much for thinking of Bradley. He is not available this time around, but I will definitely let you know should anything change.”
I go through the three stages pretty quickly. Whitford has better things to do, and I’m grateful to him anyway. Now when I receive negative feedback, just identifying I’m at stage one or two helps speed me along. And his theory set me on a path that showed me it’s normal to react emotionally to criticism and that it doesn’t mean you can’t learn from it. If you found any of this remotely helpful, you can thank Whitford too. If you didn’t, I welcome your feedback.
Esther Bintliff is deputy editor of FT Weekend Magazine
As we exited the isolation economy last year, we introduced supercloud as a term to describe something new that was happening in the world of cloud computing.
In this Breaking Analysis, we address the ten most frequently asked questions we get on supercloud. Today we’ll address the following frequently asked questions:
1. In an industry full of hype and buzzwords, why does anyone need a new term?
2. Aren’t hyperscalers building out superclouds? We’ll try to answer why the term supercloud connotes something different from a hyperscale cloud.
3. We’ll talk about the problems superclouds solve.
4. We’ll further define the critical aspects of a supercloud architecture.
5. We often get asked: Isn’t this just multicloud? Well, we don’t think so and we’ll explain why.
6. In an earlier episode we introduced the notion of superPaaS – well, isn’t a plain vanilla PaaS already a superPaaS? Again – we don’t think so and we’ll explain why.
7. Who will actually build (and who are the players currently building) superclouds?
8. What workloads and services will run on superclouds?
9. What are some examples of supercloud?
10. Finally, we’ll answer what you can expect next on supercloud from SiliconANGLE and theCUBE.
Late last year, ahead of Amazon Web Services Inc.’s re:Invent conference, we were inspired by a post from Jerry Chen called Castles in the Cloud. In that blog he introduced the idea that there were submarkets emerging in cloud that presented opportunities for investors and entrepreneurs, that the big cloud vendors weren’t going to suck all the value out of the industry. And so we introduced this notion of supercloud to describe what we saw as a value layer emerging above the hyperscalers’ “capex gift.”
It turns out that we weren’t the only ones using the term, as both Cornell and MIT have used the phrase in somewhat similar but different contexts.
The point is something new was happening in the AWS and other ecosystems. It was more than infrastructure as a service and platform as a service and wasn’t just software as a service running in the cloud.
It was a new architecture that integrates infrastructure, unique platform attributes and software to solve new problems that the cloud vendors in our view weren’t addressing by themselves. It seemed to us that the ecosystem was pursuing opportunities across clouds that went beyond conventional implementations of multi-cloud.
In addition, we felt this trend pointed to structural change going on at the industry level that supercloud metaphorically was highlighting.
So that’s the background on why we felt a new catchphrase was warranted. Love it or hate it… it’s memorable.
To that last point about structural industry transformation: Andy Rappaport is sometimes credited with identifying the shift from the vertically integrated mainframe era to the horizontally fragmented personal computer- and microprocessor-based era in his Harvard Business Review article from 1991.
In fact, it was actually David Moschella, an International Data Corp. senior vice president at the time, who introduced the concept in 1987, a full four years before Rappaport’s article was published. Moschella, along with IDC’s head of research Will Zachmann, saw clearly that Intel Corp., Microsoft Corp., Seagate Technology and others would displace the system vendors as the dominant force in the industry.
In fact, Zachmann accurately predicted in the late 1980s the demise of IBM, well ahead of its epic downfall when the company lost approximately 75% of its value. At an IDC Briefing Session (now called Directions), Moschella put forth a graphic that looked similar to the first two concepts on the chart below.
We don’t have to review the shift from IBM as the epicenter of the industry to Wintel – that’s well-understood.
What isn’t as widely discussed is a structural concept Moschella put out in 2018 in his book “Seeing Digital,” which introduced the idea of the Matrix shown on the right-hand side of this chart. Moschella posited that a new digital platform of services was emerging built on top of the internet, hyperscale clouds and other intelligent technologies that would define the next era of computing.
He used the term matrix because the conceptual depiction included horizontal technology rows, like the cloud… but for the first time included connected industry columns. Moschella pointed out that historically, industry verticals had a closed value chain or stack of research and development, production, distribution, etc., and that expertise in that specific vertical was critical to success. But now, because of digital and data, for the first time, companies were able to jump industries and compete using data. Amazon in content, payments and groceries… Apple in payments and content… and so forth. Data was now the unifying enabler and this marked a changing structure of the technology landscape.
Listen to David Moschella explain the Matrix and its implications on a new generation of leadership in tech.
So the term supercloud is meant to imply more than running in hyperscale clouds. Rather, it’s a new type of digital platform comprising a combination of multiple technologies – enabled by cloud scale – with new industry participants from financial services, healthcare, manufacturing, energy, media and virtually all industries. Think of it as kind of an extension of “every company is a software company.”
Basically, thanks to the cloud, every company in every industry now has the opportunity to build their own supercloud. We’ll come back to that.
Let’s address what’s different about superclouds relative to hyperscale clouds.
This one’s pretty straightforward and obvious. Hyperscale clouds are walled gardens: they want your data in their cloud and they want to keep it there. Sure, every cloud player realizes that not all data will go to its cloud, so they’re meeting customers where their data lives with initiatives such as Amazon Outposts, Azure Arc and Google Anthos. But at the end of the day, the more homogeneous they can make their environments, the better control, security, cost and performance they can deliver. The more complex the environment, the more difficult it is to deliver on their promises and the less margin is left for them to capture.
Will the hyperscalers get more serious about cross-cloud services? Maybe, but they have plenty of work to do within their own clouds. And today at least they appear to be providing the tools that will enable others to build superclouds on top of their platforms. That said, we never say never when it comes to companies such as AWS. And for sure we see AWS delivering more integrated digital services such as Amazon Connect to solve problems in a specific domain, call centers in this case.
We’ve all seen the stats from IDC or Gartner or whomever that customers on average use more than one cloud. And we know these clouds operate in disconnected silos for the most part. That’s a problem because each cloud requires different skills. The development environment is different, as is the operating environment, with different APIs and primitives and management tools that are optimized for each respective hyperscale cloud. Their functions and value props don’t extend to their competitors’ clouds. Why would they?
As a result, there’s friction when moving between different clouds. It’s hard to share data, move work, secure and govern data, and enforce organizational policies and edicts across clouds.
Supercloud is an architecture designed to create a single environment that enables management of workloads and data across clouds in an effort to take out complexity, accelerate application development, streamline operations and share data safely irrespective of location.
Pretty straightforward, but nontrivial. It’s why we often ask chief executives whether stock buybacks and dividends will yield as much return as building out superclouds that solve really specific problems and create differentiable value for their firms.
Let’s dig in a bit more to the architectural aspects of supercloud. In other words… what are the salient attributes that define supercloud?
First, a supercloud runs a set of specific services, designed to solve a unique problem. Superclouds offer seamless, consumption-based services across multiple distributed clouds.
Supercloud leverages the underlying cloud-native tooling of a hyperscale cloud but it’s optimized for a specific objective that aligns with the problem it’s solving. For example, it may be optimized for cost or low latency or sharing data or governance or security or higher performance networking. But the point is, the collection of services delivered is focused on unique value that isn’t being delivered by the hyperscalers across clouds.
A supercloud abstracts the underlying, siloed primitives of each hyperscale cloud’s native PaaS layer and, using its own platform-as-a-service tooling, creates a common experience across clouds for developers and users. In other words, the superPaaS ensures that the developer and user experience is identical, irrespective of which cloud or location is running the workload.
And it does so in an efficient manner, meaning it has the metadata knowledge and management that can optimize for latency, bandwidth, recovery, data sovereignty or whatever unique value the supercloud is delivering for the specific use cases in the domain.
A supercloud comprises a superPaaS capability that allows ecosystem partners to add incremental value on top of the supercloud platform to fill gaps, accelerate features and innovate. A superPaaS can use open tooling but applies those development tools to create a unique and specific experience supporting the design objectives of the supercloud.
Supercloud services can be infrastructure-related, application services, data services, security services, users services, etc., designed and packaged to bring unique value to customers… again that the hyperscalers are not delivering across clouds or on-premises.
Finally, these attributes are highly automated where possible. Superclouds take a page from hyperscalers in terms of minimizing human intervention wherever possible, applying automation to the specific problem they’re solving.
What we’d say to that is: Perhaps, but not really. Call it multicloud 2.0 if you want to invoke a commonly used format. But as Dell’s Chuck Whitten proclaimed, multicloud by design is different than multicloud by default.
What he means is that, to date, multicloud has largely been a symptom of multivendor… or of M&A. And when you look at most so-called multicloud implementations, you see things like an on-prem stack wrapped in a container and hosted on a specific cloud.
Or increasingly a technology vendor has done the work of building a cloud-native version of its stack and running it on a specific cloud… but historically it has been a unique experience within each cloud with no connection between the cloud silos. And certainly not a common developer experience with metadata management across clouds.
Supercloud sets out to build incremental value across clouds and above hyperscale capex that goes beyond cloud compatibility within each cloud. So if you want to call it multicloud 2.0, that’s fine.
We choose to call it supercloud.
Well, we’d say no. A supercloud and its corresponding superPaaS layer give you the freedom to store, process, manage, secure and connect islands of data across a continuum, with a common developer experience across clouds.
Importantly, the sets of services are designed to support the supercloud’s objectives – e.g., data sharing or data protection or storage and retrieval or cost optimization or ultra-low latency, etc. In other words, the services offered are specific to that supercloud and will vary by each offering. OpenShift, for example, can be used to construct a superPaaS but in and of itself isn’t a superPaaS. It’s generic.
The point is that a supercloud and its inherent superPaaS will be optimized to solve specific problems such as low latency for distributed databases or fast backup and recovery and ransomware protection — highly specific use cases that the supercloud is designed to solve for.
SaaS as well is a subset of supercloud. Most SaaS platforms either run in their own cloud or have bits and pieces running in public clouds (e.g. analytics). But the cross-cloud services are few and far between or often nonexistent. We believe SaaS vendors must evolve and adopt supercloud to offer distributed solutions across cloud platforms and stretching out to the near and far edge.
Another question we often get is: Who has a supercloud and who is building a supercloud? Who are the contenders?
Well, most companies that consider themselves cloud players will, we believe, be building superclouds. Above is a common Enterprise Technology Research graphic we like to show with Net Score or spending momentum on the Y axis and Overlap or pervasiveness in the ETR surveys on the X axis. This is from the April survey of well over 1,000 chief executive officers and information technology buyers. And we’ve randomly chosen a number of players we think are in the supercloud mix and we’ve included the hyperscalers because they are the enablers.
We’ve added some of those nontraditional industry players we see building superclouds such as Capital One, Goldman Sachs and Walmart, in deference to Moschella’s observation about verticals. This goes back to every company being a software company. And rather than pattern-matching an outdated SaaS model we see a new industry structure emerging where software and data and tools specific to an industry will lead the next wave of innovation via the buildout of intelligent digital platforms.
We’ve talked a lot about Snowflake Inc.’s Data Cloud as an example of supercloud, as well as the momentum of Databricks Inc. (not shown above). VMware Inc. is clearly going after cross-cloud services. Basically every large company we see is either pursuing supercloud initiatives or thinking about it. Dell Technologies Inc., for example, showed Project Alpine at Dell Technologies World; that’s a supercloud in development. Snowflake is introducing a new app dev capability based on its superPaaS (our term, of course; the company doesn’t use the phrase). Add to the list MongoDB Inc., Couchbase Inc., Nutanix Inc., Veeam Software, CrowdStrike Holdings Inc., Okta Inc. and Zscaler Inc. Even the likes of Cisco Systems Inc. and Hewlett Packard Enterprise Co., in our view, will be building superclouds.
Although ironically, as an aside, Fidelma Russo, HPE’s chief technology officer, said on theCUBE she wasn’t a fan of cloaking mechanisms. But when we spoke to HPE’s head of storage services, Omer Asad, we felt his team is clearly headed in a direction that we would consider supercloud. It could be semantics or it could be that parts of HPE are in a better position to execute on supercloud. Storage is an obvious starting point. The same can be said of Dell.
Listen to Fidelma Russo explain her aversion to building a manager of managers.
And we’re seeing emerging companies like Aviatrix Systems Inc. (network performance), Starburst Data Inc. (self-service analytics for distributed data), Clumio Inc. (data protection – not supercloud today but working on it) and others building versions of superclouds that solve a specific problem for their customers. And we’ve spoken to independent software vendors such as Adobe Systems Inc., Automatic Data Processing LLC and UiPath Inc., which are all looking at new ways to go beyond the SaaS model and add value within cloud ecosystems, in particular building data services that are unique to their value proposition and will run across clouds.
So yeah – pretty much every tech vendor with any size or momentum and new industry players are coming out of hiding and competing… building superclouds. Many that look a lot like Moschella’s matrix with machine intelligence and artificial intelligence and blockchains and virtual reality and gaming… all enabled by the internet and hyperscale clouds.
It’s moving fast and it’s the future, in our opinion, so don’t get too caught up in the past or you’ll be left behind.
We’ve given many in the past, but let’s try to be a bit more specific. Below we cite a few and we’ll answer two questions in one section here: What workloads and services will run in superclouds and what are some examples?
Analytics. Snowflake is the furthest along with its data cloud in our view. It’s a supercloud optimized for data sharing, governance, query performance, security, ecosystem enablement and ultimately monetization. Snowflake is now bringing in new data types and open-source tooling and it ticks the attribute boxes on supercloud we laid out earlier.
Converged databases. Running transaction and analytics workloads. Take a look at what Couchbase is doing with Capella and how it’s stretching the cloud to the edge with Arm-based platforms, optimizing for low latency across clouds and out to the edge.
Document database workloads. Look at MongoDB – a developer-friendly platform that with Atlas is moving to a supercloud model running document databases very efficiently. Accommodating analytic workloads and creating a common developer experience across clouds.
Data science workloads. For example, Databricks is bringing a common experience for data scientists and data engineers driving machine intelligence into applications and fixing the broken data lake with the emergence of the lakehouse.
General-purpose workloads. For example, VMware’s domain. Very clearly there’s a need to create a common operating environment across clouds and on-prem and out to the edge and VMware is hard at work on that — managing and moving workloads, balancing workloads and being able to recover very quickly across clouds.
Network routing. This is the primary focus of Aviatrix, building what we consider a supercloud and optimizing network performance and automating security across clouds.
Industry-specific workloads. For example, Capital One announcing its cost optimization platform for Snowflake – piggybacking on Snowflake’s supercloud. We believe it’s going to test that concept outside its own organization and expand across other clouds as Snowflake grows its business beyond AWS. Walmart Inc. is working with Microsoft to create an on-prem to Azure experience – yes, that counts. We’ve written about what Goldman is doing and you can bet dollars to donuts that Oracle Corp. will be building a supercloud in healthcare with its Cerner acquisition.
Supercloud is everywhere you look. Sorry, naysayers. It’s happening.
With all the industry buzz and debate about the future, John Furrier and the team at SiliconANGLE have decided to host an event on supercloud. We’re motivated and inspired to further the conversation. TheCUBE on Supercloud is coming.
On Aug. 9, out of our Palo Alto studios, we’ll be running a live program on the topic. We’ve reached out to a number of industry participants, including VMware, Snowflake, Confluent, Sky High Security, HashiCorp, Cloudflare and Red Hat, to get the perspective of technologists building superclouds.
And we’ve invited a number of vertical industry participants in financial services, healthcare and retail that we’re excited to have on along with analysts, thought leaders and investors.
We’ll have more details in the coming weeks, but for now if you’re interested please reach out to us with how you think you can advance the discussion and we’ll see if we can fit you in.
So mark your calendars and stay tuned for more information.
Thanks to Alex Myerson, who does the production, podcasts and media workflows for Breaking Analysis. Special thanks to Kristen Martin and Cheryl Knight, who help us keep our community informed and get the word out, and to Rob Hof, our editor in chief at SiliconANGLE.
Remember we publish each week on Wikibon and SiliconANGLE. These episodes are all available as podcasts wherever you listen.
Email email@example.com, DM @dvellante on Twitter and comment on our LinkedIn posts.
Also, check out this ETR Tutorial we created, which explains the spending methodology in more detail. Note: ETR is a separate company from Wikibon and SiliconANGLE. If you would like to cite or republish any of the company’s data, or inquire about its services, please contact ETR at firstname.lastname@example.org.
Here’s the full video analysis:
All statements made regarding companies or securities are strictly beliefs, points of view and opinions held by SiliconANGLE media, Enterprise Technology Research, other guests on theCUBE and guest writers. Such statements are not recommendations by these individuals to buy, sell or hold any security. The content presented does not constitute investment advice and should not be used as the basis for any investment decision. You and only you are responsible for your investment decisions.
Disclosure: Many of the companies cited in Breaking Analysis are sponsors of theCUBE and/or clients of Wikibon. None of these firms or other companies have any editorial control over or advanced viewing of what’s published in Breaking Analysis.
Each year the World Economic Forum publishes its Global Risks Report, which aims to outline the biggest threats facing society in the year ahead. The 2022 edition features many familiar entries, including climate action failure, extreme weather, and biodiversity loss. Our perception of such catastrophic events is shaped by the so-called availability heuristic, whereby we're naturally drawn to the things we're familiar with. The kinds of threats posed in the Global Risks Report are commonly featured in the media, so they perhaps take on added prominence in our thinking as a result.
A threat that has never been mentioned in all the years the report has been published is the declining birth rate, yet Elon Musk considers it serious enough to have branded it the biggest threat to civilization at the Wall Street Journal's CEO Council Summit.
Nowhere is this more evident than in China, where the population is set to decline for the first time since the famine of 1959-1961, driven by a fall in the fertility rate to 1.15 in 2021. China is far from alone, however: Australia and the United States have fertility rates of 1.6, and Japan just 1.3.
While Musk's concerns have an existential element to them, in the short to medium-term, it means that our societies are going to get significantly older. That carries numerous challenges with it, but nowhere more so than in the workplace.
This is especially so in the tech sector, which has longstanding ageism issues. For instance, Facebook faced a couple of lawsuits in 2017, Google paid out $11 million to over 200 job seekers in 2019, and IBM was involved in a civil case this year revolving around the description of older workers as "dino babies". They're far from alone, however, and data from Stack Overflow shows that the average age of developers is between 22 and 29, with less than 7% over 45.
This would be fine if we weren't in the midst of a widespread talent shortage. While the "great resignation" was driven by younger workers in its early stages, it is currently being driven by older, more tenured knowledge workers, with resignation rates among older workers growing by 34% in the last year.
It's perhaps no surprise, therefore, that UN figures suggest there will be around 30 million fewer people of working age in the world's five largest economies. That Total Jobs research reveals 80% of us are largely oblivious to this looming labor shortage does little to calm nerves either.
As societies age, however, there is a real opportunity to turn this to our advantage. For instance, research from the European Commission suggests that the "silver economy" will be worth €5.7 trillion by 2025.
Such potential also exists in terms of the workforce. Research from the International Longevity Centre highlights the strong potential for a ‘longevity dividend’ underpinned by greater productivity as we age.
This seldom carries over into public discourse, however, which tends to view aging as a burden, with large numbers entering retirement and ceasing to contribute to society while drawing pensions and demanding larger shares of healthcare provision. This perception is compounded by the difficulty of raising the retirement age or reducing entitlements for the elderly.
Joseph Coughlin, from MIT's AgeLab, perhaps summed it up best when he said that longevity was "the greatest achievement in the history of mankind and all we can say is, is it going to bankrupt Medicare?"
To capitalize on this potential, we need to rethink what it means to age, as a report from the U.K.’s Government Office for Science so ably demonstrates.
“As the population ages, so will the U.K. workforce. The productivity and economic success of the U.K. will be increasingly tied to that of older workers,” the authors explain. “Enabling people to work for longer will help society to support growing numbers of dependents, while providing individuals with the financial and mental resources needed for increasingly long retirements.”
Such a future has numerous challenges to overcome, however. For instance, research from the University of Gothenburg highlights the stereotypes older workers face, as they're expected to have difficulty processing information, less interest in technology, and generally struggle to pick up new things.
This then feeds through into the performance of older workers, with research from Georgia State University highlighting how negative stereotypes undermine the physical and mental performance of older workers, such that the stereotypes become self-fulfilling.
It's perhaps no great surprise that research from the University of Basel finds that such an environment makes older workers feel excluded from the workforce.
Of course, these stereotypes are not founded on any kind of real evidence. Research from the IZA Institute of Labor Economics highlights, for instance, how older people are just as capable of learning new things as their younger peers. The study finds that people who are close to retiring are just as interested in learning new skills as their younger peers, even if there is no strict need for them to do so.
Similarly, there is no evidence that older people are any less creative or entrepreneurial. In fact, quite the opposite is true. Research from MIT and Northwestern University highlights how older entrepreneurs can often be more successful than their younger peers.
The research reveals that entrepreneurial success for the under-25s is as rare as a lesser-spotted unicorn. Success rates then increase through one's late 20s and don't decrease even into one's 50s. Indeed, the authors note that the average age of company founders in the United States is 41.9, with the highest-growth startups being founded by entrepreneurs aged 45 on average. What's more, a 50-year-old entrepreneur was 1.8 times more likely to achieve high growth than a founder in their 30s.
What's more, research from Flinders University highlights how older workers are often crucial for surviving the kind of turbulence that we're currently experiencing.
“Mature adults demonstrate considerable resilience,” the researchers say. “The aspect of role modeling resilience is an especially important influence on younger workers. It includes mature coping strategies, emotional intelligence and empathy—and these attributes have never been more important in the workforce.”
Research from Ohio State University’s Fisher College of Business explores how organizations can encourage older workers to stick around long enough for that knowledge to be retained. The analysis found that the type of work environment was key, with autonomy, information sharing, a range of developmental opportunities, involvement in decision-making, and good compensation and benefits typifying the kind of environment that appeals to older workers.
This was built upon by a second study, from Massey Business School, which involved a survey of nearly 1,250 New Zealand workers over 55 years of age, and four key factors emerged in helping organizations retain and engage older workers:
This can only be achieved if HR departments have a profound shift in mindset, however, and begin to appreciate the tremendous asset older workers can be in the workforce of today.
"We need HR departments to realize that the greatest opportunity to grow their talent is not out of college, it's actually out of retirement," Chip Conley, founder of the Modern Elder Academy says. "We also need to recognize that if we treat older employees like their learning and development is done by the time they're 40, it's no surprise that we have older employees who are not as curious, so we need to learn to invest in long-term employees."
The older workforce can clearly be an asset, not least as we weather the storms facing us at the moment, but if we are to realize that asset, we will need to rethink our assumptions about older workers and actively work to break down the many stereotypes they face.
Over the past few years, I kept bumping into something called Hershey fonts. After digging around, I found a 1967 government report by a fellow named Dr. Allen Vincent Hershey. Back in the 1960s, he worked as a physicist for the Naval Weapons Laboratory in Dahlgren, Virginia, studying the interaction between ship hulls and water. His research was aided by the Naval Ordnance Research Calculator (NORC), which was built by IBM and was one of the fastest computers in the world when it was first installed in 1954.
The NORC’s I/O facilities, such as punched cards, magnetic tape, and line printers, were typical of the era. But the NORC also had an ultra-high-speed optical printer. This device had originally been developed by the telecommunications firm Stromberg-Carlson for the Social Security Administration in order to quickly print massive amounts of data directly upon microfilm.
Perhaps you’ve heard stories of programmers waiting impatiently for printouts from mainframe operators? Well you would have waited even longer for your optical plots. Since they used film, they required chemical processing to become photos, slides, or microfilm. But despite this wait time, the printing speed was much faster than line printers of the day: 7000 lines per minute vs 150. While this printing speed was certainly impressive, the ability to plot entire graphs and figures in just fractions of a second was no doubt well appreciated by the scientists at Dahlgren.
What made this device so fast? It was the Charactron Tube which we covered back in 2017. This special CRT has an internal metal screen into which a font is etched. The electron beam projects an entire letter on the phosphor face of the tube in one “flash”, which in turn exposes photographic film. No raster scanning or vector drawing was involved, so the process was fast. But soon the system would be utilized in ways not imagined by the original designers.
Back in those days, before roff, LaTeX, and WYSIWYG word processors, preparing technical reports full of complicated mathematical equations and data plots was quite time consuming. The text itself would be prepared using an ordinary typewriter, but special-purpose typewriters like the Varityper were needed to typeset math equations. Plots and figures would generally be hand-drawn or produced on a pen plotter. Hershey came to the realization that the NORC’s optical printer could take on a new role and be used as a typesetter. Dr. Hershey not only saw this possibility, but possessed a keen interest in calligraphy and didn’t mind spending his evenings developing this new capability.
The key to make this happen was to define a new mode of output which bypassed the internal stencil fonts. Rather, the film would be exposed by using the period (full stop) stencil as a “dot”, and moving the dot under program control. When applied to text, this is of course slower than using the stencil, but it allows an arbitrary selection of fonts, or repertories as Dr Hershey called them. Furthermore, it opens up the ability to plot data directly onto film, bypassing the slower pen-plotter and even slower hand-drawing techniques of the day.
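To get a feel for this dot-programming mode, here is a rough sketch in Python of the underlying idea: stepping the period stencil along a vector stroke so that repeated flashes render a solid line. The function name, coordinates, and spacing are illustrative assumptions on our part, not anything taken from the SC4020's actual command set.

```python
# Illustrative sketch of Hershey's "dot mode": rather than flashing a
# character stencil once, the period stencil is flashed repeatedly while
# being stepped along each vector stroke under program control.
import math

def stroke_to_dots(x0, y0, x1, y1, spacing=1.0):
    """Return the dot positions needed to render one vector stroke."""
    length = math.hypot(x1 - x0, y1 - y0)
    # At least one step, spaced roughly `spacing` apart along the stroke.
    steps = max(1, round(length / spacing))
    return [(x0 + (x1 - x0) * t / steps, y0 + (y1 - y0) * t / steps)
            for t in range(steps + 1)]
```

A horizontal stroke four units long at unit spacing, for example, becomes five dot flashes, one per grid position along the line.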
Dr Hershey learned that engineers at the Bell Laboratories in Murray Hill, New Jersey had developed a font for their optical printer using a similar technique, approaching it from a rasterization point of view. Dr Hershey realized he could expand this to embrace more exotic and artistic glyphs. He focused on using vectors to design his fonts, and embarked on the lengthy journey of researching and building his collection of vector-based fonts.
In hindsight, he not only built a set of tools to solve the needs of the Dahlgren community, but pushed the limits of the optical plotter to the extreme. In his reports he demonstrated fonts not only in English, but in other languages such as Greek, Russian, and Japanese. In addition to mathematical symbols, he showed how the plotter could draw electronic circuit diagrams, stellar maps of the galaxy, maps in general, chemical bonds, etc. An example of his thoroughness is found in his 1967 report “Calligraphy for Computers”. Although Hershey only implemented a subset of Japanese characters as a demonstration, he searched through over 5000 of them looking for glyphs which might exceed the limitations of his method. He could only find one such case, which he reasonably decided to ignore:
With some omission of detail in tight spaces and some overflow in complicated cases, this size [a height of 21 raster units] is believed to be adequate for all characters in Nelson’s dictionary except No. 5444. Inasmuch as this character represents dragons in motion, it is of doubtful utility.
All in all, Dr. Hershey generated about 1400 western and 800 Japanese glyphs, all drawn painstakingly by hand on graph paper. There were five different optical font sizes and three different stroke types, not to mention all sorts of symbols used in mapping, science, mathematics, etc.
Hershey was trying to generate pleasing fonts for printed reports using the leading technology of the day. By far the greatest number of glyphs were complex — that is, they were constructed with multiple lines, or strokes, to give increased and variable stroke width. These strokes are often depicted as thin lines when you search for his fonts online today, but when properly drawn, taking into account the size of the SC4020 beam, solid letters result. Today we have a plethora of fonts at our disposal, so why are Hershey fonts still being used? You might be tempted to say it’s because they are in the public domain. But probably the biggest reason is the single-stroke family of fonts, which is still quite useful in many different applications. Here are some examples I’ve encountered over the years.
Today if you want to draw text in OpenSCAD, there is built-in support. But when I was first learning OpenSCAD in early 2015, this wasn’t available. A project I was working on needed text, so I decided to make my own. As I went down the rabbit hole of simple lettering that I could implement using the graphics primitives of OpenSCAD, I discovered this is a technique with a long history.
It was during this research that I first stumbled upon Hershey fonts. I have since learned that classic lettering styles, including the Leroy lettering sets used by draftsmen and some comic book letterers, were a source of inspiration for Dr. Hershey.
At the time, I passed on Hershey fonts because they only used lines, and it seemed that real curves would look better. I made my own vector font based on these simple drafting lettering styles which used only lines and circular arcs — things I knew how to make with OpenSCAD. Looking back at this now, I see that text was integrated into OpenSCAD in March of that year. If I had just waited three months, I would’ve saved myself the time and hassle.
As a young engineer, I worked on projects that required durable front panel markings. We made these by having the lettering and symbols engraved into the panel using a CNC machine; one well-proven method was to engrave the panel and then fill in the grooves with epoxy paint. Today we can skip the hassle of machine engraving altogether: low-cost direct laser etching provides a cleaner and often more affordable technique. Both methods work by moving a head, either a milling tool or a laser beam, along paths defined by X-Y pairs. This is a perfect fit for vector fonts. Letters can be made more or less bold by changing the diameter of the milling tool or the size of the laser beam, and they can be easily scaled or rotated as needed using basic trigonometry. Trying to do this with a bit-mapped font would be awkward at best.
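That “basic trigonometry” is just the standard 2-D scale-and-rotate transform applied to each glyph vertex. A minimal sketch (the function name and parameters are mine, not from any particular CAM package):

```python
import math

def transform(points, scale=1.0, angle_deg=0.0):
    """Scale and rotate 2-D glyph vertices about the origin.
    This is all that's needed to resize or tilt a vector font."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(scale * (x * c - y * s),    # x' = s(x cos a - y sin a)
             scale * (x * s + y * c))    # y' = s(x sin a + y cos a)
            for x, y in points]

# Double the size and rotate a vertex 90 degrees counter-clockwise:
rotated = transform([(1, 0)], scale=2, angle_deg=90)
```

The same two lines of arithmetic work whether the output drives a milling spindle, a laser head, or a CRT beam.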
We’ve all put text on our PCB’s silkscreen and copper layers, but perhaps didn’t stop and think about the details. When you generate files for manufacturing, the traces and features of your board, themselves naturally vectors, are expressed in the familiar Gerber format (RS-274X). Letters are expressed in the same way.
The first photoplotters used for making PCB film artwork were made by the Gerber Scientific company in the 1960s. The device grew out of a family of large computer-controlled X-Y tables, originally used for tasks like cutting patterns from fabric and making prescription eyeglass lenses. The Gerber photoplotter’s basic operation was not unlike a pen plotter or CNC engraving machine, except that a beam of light would shine through a selectable aperture to expose photographic film. A wheel containing different aperture sizes allowed you to change the line width and was also used to “flash” pads. It was only natural that vector commands were used to control the photoplotter. Rather than reinvent the wheel, Gerber defined a subset of the CNC digital interface standard RS-274D that had been around since the 1950s. With a few extensions and revisions, this is still the format we use today to convey our PCB artwork to the fabrication shop.
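Those vector commands are easy to picture. As a rough illustration (not a complete, manufacturing-ready Gerber file), a single stroke could be turned into draw commands like this; the aperture number D10 and the use of pre-scaled integer coordinates are assumptions for the example:

```python
def gerber_stroke(points, aperture="D10"):
    """Emit Gerber-style draw commands for one polyline stroke.
    D02 moves with the shutter closed; D01 exposes film while
    moving to the new point, drawing a line at the aperture's width."""
    cmds = [aperture + "*"]                 # select the aperture ("pen width")
    x, y = points[0]
    cmds.append(f"X{x}Y{y}D02*")            # move to the start of the stroke
    for x, y in points[1:]:
        cmds.append(f"X{x}Y{y}D01*")        # draw each following segment
    return cmds

# A single horizontal trace segment:
cmds = gerber_stroke([(0, 0), (100, 0)])
```

Swap the light beam for a pen or a milling tool and the same move/draw vocabulary describes a plotter or an engraver, which is why vector fonts slot so naturally into this world.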
As in many fields, technology marches on. PCB fabrication shops don’t actually use Gerber-style photoplotters anymore. These days, more often than not, the manufacturer will convert your Gerber files so that the artwork is transferred to film using a high-resolution, high-speed raster printer. In some cases, the artwork is projected directly onto the PCB itself, entirely bypassing the film and intermediate transfer step.
That said, I don’t think we will ever send rasterized PCB artwork to the manufacturers. The features of the PCB that we send to the shop, traces and pads, are inherently vector-like in nature. And for proper results, the manufacturer needs to identify these features in order to tweak them according to their own unique manufacturing process. That’s the meaning of fabrication notes like “Line, pad, and via dimensions are specified as finished size” and “Controlled impedance traces xxxx should be 75 ohm”. Even silk-screen lettering widths may need to be adjusted depending on the process being used. Adjusting these parameters on a raster-based image, while not impossible, would be much more complicated.
I needed to put some Korean text on a PCB for a client a while back. After discussing this on the KiCad forum, I learned that within KiCad the PCB lettering is stored internally as vectors, using the original Hershey font format. I won’t go into the gory details, but the original Hershey format is peculiar, to say the least. Hershey used only printable letters, what we would call printable ASCII today, to describe the coordinates in a very compact style. There is a letter-based Cartesian grid system with the letter R as zero: the letter S is 1, P is -2, and so on. The letter H would appear as 508 9G]KFK[ RYFY[ RKPYP in this notation.
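To make that concrete, here is a minimal sketch of a decoder, assuming the commonly distributed record layout (a 5-character glyph number, a 3-character pair count, then pairs of letters, with the record padded to those column widths):

```python
def decode_hershey(record):
    """Decode one record of classic Hershey font data.
    Each letter encodes a value relative to 'R' (= 0), so 'S' is 1 and
    'P' is -2. The first pair gives the left/right extents of the glyph;
    the special pair " R" means "lift the pen" and start a new stroke.
    Note that in Hershey's data, y grows downward."""
    val = lambda c: ord(c) - ord("R")
    number = int(record[0:5])
    npairs = int(record[5:8])
    body = record[8:]
    left, right = val(body[0]), val(body[1])
    strokes, current = [], []
    for i in range(2, 2 * npairs, 2):
        pair = body[i:i + 2]
        if pair == " R":                     # pen up: close the current stroke
            strokes.append(current)
            current = []
        else:                                # pen down: one (x, y) vertex
            current.append((val(pair[0]), val(pair[1])))
    if current:
        strokes.append(current)
    return number, (left, right), strokes

# The capital H from the article: two verticals plus a crossbar.
num, extents, strokes = decode_hershey("  508  9G]KFK[ RYFY[ RKPYP")
```

Running this on the H record yields glyph 508 with extents (-11, 11) and three strokes, exactly the two vertical bars and crossbar you would sketch on graph paper.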
I was recently playing around with MicroPython on the ESP32-based TTGO module, in order to experiment with text on its tiny TFT screen. I discovered that Hackaday.io user [Russ] used Hershey fonts in his Turtle Plot Bot. This gave me a great jump start on some of my experiments, and it is yet another example of finding Hershey fonts under the hood of modern projects.
CRT-based projects using vector graphics have become popular in recent years. There are clock projects and general-purpose vector displays. This is yet another application where describing fonts with vectors matches nicely with the underlying operation of the display, and you won’t be surprised to learn that Hershey fonts are commonly found in these projects. For example, this tutorial by [Trammell Hudson] on vector display basics shows how to draw Hershey font letters on the screen.
What would Dr Hershey think about his simple single-stroke fonts still being used over 60 years later? Considering all the multi-stroke letters and Japanese symbols that he so meticulously designed by hand, he might be a little surprised, if not disappointed. Let us know if you have encountered or used Hershey fonts in your projects. If you want to learn more, here is an interesting presentation by Frank Grießhammer about Dr Hershey himself and the development of his fonts.