Money-back guarantee on the P2020-007 cheat sheet at killexams.com

With the help of the thoroughly tested killexams.com IBM Optimization Technical Mastery Test v1 cheat sheet and practice test, you can figure out how to build your own P2020-007 knowledge. Our P2020-007 study guides are kept up to date and to the point. The IBM P2020-007 braindumps broaden your understanding and help you considerably in preparing for the P2020-007 exam.

Exam Code: P2020-007 Practice exam 2022 by Killexams.com team
IBM Optimization Technical Mastery Test v1
IBM Optimization tricks
Killexams : How to revive your old computer

As a man of a certain age, I know that everything slows down as it gets older. But with computers, that decline can be especially precipitous. After just a couple of years, bootups can grow sluggish, apps may take longer to load, and the spinning wheel of death can become a more frequent feature of your user experience.

Eventually the frustrations become so great that people buy a new system. Sometimes that’s the right decision. Sometimes the hardware is so old (and possibly damaged) that it can’t keep up with modern software and ever-more complex websites.

But often enough, those computers don’t need to be put out to pasture. In fact, many elderly computers are still out there cranking, perhaps with inexpensive upgrades. “The 2012 MacBook Pro is probably our largest seller,” says Nick Bratskeir, owner of Flipmacs, a company that sells refurbished Macs and PCs on marketplaces like Back Market, eBay, and Swappa. And those 2012 models, with upgrades to solid-state hard drives, sell for about $150.

So what’s the magic that brings an old Mac or PC back to life? Let’s look at what goes wrong as a system ages, and how to fix it. (For a quick overview of all the tips in this article, scroll down to the end.)

Failing hardware

What goes wrong

Silicon chips can last a very long time. But all older (and some newer) computers have at least one vulnerable component: spinning hard drives. “Anything that is moving like that is susceptible to wear and tear,” says Anuj Patel, owner of Asheville, North Carolina-based repair shop Tech House. Individual sectors of the drive can also start to fail, wiping out bits of data.

Batteries present another weak spot. With age, the battery components break down, chemically or physically, not only reducing capacity but also affecting the consistency of the power supplied. “Even if you’re plugged in, with the bad battery, sometimes you’ll notice [the computer] goes slow,” says Patel.

What to do

The first step in reviving your computer is to check for failing hardware. Windows has a hardware-checking app called Device Manager, but it’s rather complex. Windows Central provides a great tutorial for those who want to brave it. It’s easier to at least examine the most vulnerable parts. To check the hard drive, Bratskeir recommends the Hard Disk Sentinel app (free trials available), which also repairs software errors. To check the battery on laptops, download the free Pure Battery Analytics app.

Hard Disk Sentinel provides the skinny on your drive’s health and expected lifetime. [Image: Hard Disk Sentinel]
Hardware checks are easier on Macs, thanks to MacOS’s built-in tools. Start with Apple Diagnostics. If all goes well, you can further probe the hard drive using Disk Utility, which also fixes software issues with the disk. Finally, run a check using the Battery app.
MacOS Hardware Check has a simple interface that runs when you press a key combination on system start. [Image: Sean Captain]
If your system has faulty hardware, you have some decisions to make. Replacing a motherboard is complex and pricey. But swapping in a hard drive or RAM (that is, computer memory) can be easy and inexpensive. A 500-gigabyte laptop solid state hard drive (which is faster and more durable than a spinning disk hard drive) runs about $70. If you’re not up for performing the surgery yourself, you can always go to a repair shop. Tech House, for instance, charges $139 (plus tax) for the operation. Best Buy’s Geek Squad charges a flat fee of $84.95 for physical repair work. Weigh these costs against the value of the system you are trying to repair by checking for similar refurbished models on sites like Back Market.

Even if your spinning disk is healthy, upgrading to a solid-state drive (SSD) may be worthwhile for the considerable speed boost. “It’s the one [upgrade] that makes me happiest, ’cause the customer is gonna see me as a magician,” says Patel. (Just don’t go with the very cheapest drives: A colleague who purchased five refurbished MacBooks from Flipmacs encountered problems with several of the very low-cost drives the company uses, though Bratskeir says the problems could lie in a faulty cable.) But before you make that investment, check for some software problems with simple fixes.

Bulging hard drive

What goes wrong

Even a perfectly healthy hard drive can be a bottleneck if it’s too full, especially if it has spinning disks. “Older devices slow down when the disk is almost full because the device becomes so busy spinning…while trying to figure out where on the spinning disks all the different files are stored,” Carmen Zlateff, partner director of Windows User Experience at Microsoft, writes in an email. Even SSDs can bog down if they are close to full. Both Patel and Bratskeir recommend leaving about 20 gigabytes of free hard drive space.

 What to do

It’s hard to throw things away. And on an old computer with thousands of files, it’s hard to even keep track of what’s there in the first place. Both Windows and MacOS offer tools to help. Windows Storage Sense can automatically delete unnecessary system files and empty the Recycle Bin. Windows also has a built-in app, called Disk Defragmenter, that tidies up how data is organized on the disk so the drive can retrieve files more quickly.

Microsoft Storage Sense helps you offload files to the cloud. [Image: Microsoft]
The MacOS Storage app offers several services, including providing a list of all files, in order of size, making it easy to find the biggest culprits.

What about those files you can’t part with? You could copy them over to an external USB drive. One-terabyte models start around $60. Some older Macs can also use the Transcend JetDrive Lite, which fits flush into the SD card slot and provides up to 1 TB of storage.

You can also move files to a cloud storage service such as Dropbox, Microsoft OneDrive, and Apple iCloud (all for Windows and Mac). Windows Storage Sense and Apple Storage offer tools to migrate files to OneDrive and iCloud, respectively. Dropbox also offers helper tools.

(If you do a hard drive upgrade, you’ll need to put copies of all personal files on an external drive or cloud service, then copy them back to the newly installed drive.)

Superfluous apps

What goes wrong

Unused apps are a good place to start your hard drive purge, but the benefits go beyond clearing space. “The sneakiest part is apps which install themselves to always run right away (when the computer is started), and to always run in the background,” says Microsoft’s Zlateff. For the apps you do keep, you might want to prevent them from starting automatically. This includes resource-hogging anti-malware software, since both Windows and MacOS have built-in malware protection.

What to do

Go through your applications folder to identify anything that you don’t use. Microsoft provides uninstall instructions for apps on Windows 10 and 11 and Windows 8. Apple provides instructions for MacOS.

Removing apps from the Launchpad in MacOS. [Image: Apple]

Next, see if any remaining apps are launching at startup and decide if you can disable them. (You probably wouldn’t disable a file-syncing service like Dropbox, for instance.) In Windows, go to Settings, then Startup Apps. In MacOS, go to Preferences, Users and Groups, then Login Items.

Delete apps from the Login Items in MacOS. [Image: Sean Captain]
The Windows Startup apps menu. [Image: Sean Captain]
If you are really worried about malware (say, you have clicked on a malicious link or downloaded a sketchy attachment), you can scan with a third-party application. Both Patel and Bratskeir suggest Malwarebytes (Windows and MacOS). Patel suggests running the app once to check and—if necessary—clean the system, then uninstalling the app.

Bogged-down browser

What goes wrong

“Web applications, websites, everything is continuously getting more and more resource demanding,” says Patel. “Though your computer may be running just as fast [as when it was new], if you’re running modern apps and going to websites of 2022, it’s still got to process all that info.” Browsers get further bogged down by extensions, including helpers for online gaming, shopping, news reading, security, and customizing the look of your browser. “Definitely extensions will slow you down, to the Dickens,” says Bratskeir.

What to do

Animated ads can be a serious resource drain. A blocker such as AdblockPlus will lighten the load. (You can disable the blocking for any ad-supported sites you want to support.) You can also adjust settings in Firefox and Safari to prevent videos from autoplaying. On Chrome, use the AutoplayStopper extension.

You can further reduce resource drain by limiting how many browser windows and tabs you keep open. (See Fast Company‘s Chrome speed-up guide for more tips on browser optimization.)

While some extensions, like those I just mentioned, are handy, it’s good to periodically remove the ones you aren’t using. Here are instructions for Chrome, Safari, and Firefox.

Disabling or removing extensions in Firefox. [Image: Sean Captain]

Aging operating system

What goes wrong

All operating systems contain some bugs, including existing or newly discovered security vulnerabilities. Sticking with old software leaves you exposed. Newer operating systems may also utilize the hardware more efficiently, but that’s contentious. Daryn Swanson, a technology consultant, was skeptical. “The further you upgrade from the OS that was shipped with the hardware, the less likely it becomes that your hardware is included in QA testing, which is why older systems eventually become slow,” he says. Since this course of action is controversial, a bit involved, and sometimes hard to undo, I saved it as a final step to consider taking.

What to do

What’s not controversial is patching the operating system you already have. Apple, for instance, currently provides updates for the latest three versions, back to Catalina. Microsoft provides updates back to Windows 8.1. “I would say that the oldest version of an OS you should run is the oldest supported version,” says Swanson.

You can find instructions for downloading and installing updates on both the Microsoft (Windows 10 and 11) and Apple sites.

Use Windows update to download and install patches for your current OS. [Image: Microsoft]
If patching doesn’t help, it may be time to try upgrading to an entirely new operating system.

Microsoft specifies minimum hardware requirements for upgrades to Windows 10 and Windows 11. (And Lifewire describes how to look up your PC’s specs on various versions of Windows.)

Apple has a handy guide that describes the highest version of the OS it recommends for each model. MacOS Catalina, which is two versions old, supports systems all the way back to 2012. (I’ve seen it running smoothly on those systems, provided they have an SSD upgrade.)

MacOS alerts you to eligible operating system updates. [Image: Apple]
One important distinction: While all MacOS upgrades are free, upgrading from Windows 8 or earlier to 10 or 11 typically costs $139. (But upgrading from Windows 10 to 11 is free.)

If the upgrade doesn’t help, or even hurts, there are ways to go back. Windows 11 has a simple rollback feature. See “Go back to your previous version of Windows” on this troubleshooting page. With Macs, the best method is to restore your old OS from a Time Machine backup. If you didn’t make a backup, there are some harder ways to roll back.

Is it worth fixing?

As software and websites get ever-more taxing, even the best computers fall behind. If you demand a lot from your system, for tasks such as video editing or gaming, you’re probably going to buy a new computer at least every four years. But if you are a casual user—surfing, emailing, and writing—you may be able to hang onto an older system for quite a while.

Since a lot of troubleshooting is free, it’s worth taking an afternoon to try out the easy fixes. If hardware is declining, though, you have some tougher decisions. It’s definitely worth checking out the going price for a similar refurbished system to make sure that the hardware repair (or pricey Windows upgrade) isn’t more expensive than a replacement computer. The best reason to revive your old computer, after all, is to save money.

TL;DR: 6 ways to renew your computer

1. Check for hardware defects

In Windows, use Hard Disk Sentinel and Pure Battery Analytics. In MacOS, use Apple Diagnostics, Disk Utility, and the Battery app.

2. Free up at least 20GB of hard drive space

Get help from the Storage Sense app in Windows and the Storage app in MacOS. Move files you can’t part with to an external drive or cloud service like Dropbox, OneDrive, or iCloud. On Windows, you can also tidy up the drive with Disk Defragmenter.

3. Remove apps and disable auto start

Uninstall apps in Windows 10 and 11, Windows 8, or MacOS. Remove Startup Apps in Windows and Login Items in MacOS.

4. Streamline your browser

Stop resource-hogging advertisements with plugins like AdblockPlus and disable autoplaying videos in Chrome, Safari, and Firefox. Remove superfluous extensions in Chrome, Safari, and Firefox.

5. Refresh your operating system

Update your current version in Windows and MacOS. If this doesn’t help, upgrade to a newer version from Microsoft or Apple.

6. Consider a hard drive upgrade

If none of these steps help your system, and it’s equipped with a spinning hard drive, look into the cost of upgrading to a solid-state drive (including the expense and/or hassle of reinstalling all your apps and files). To see if it’s worthwhile, compare the estimate for the upgrade to the cost of a similar refurbished model on sites like Back Market.

This article has been updated with additional input from Flipmacs.

Source: https://www.fastcompany.com/90773755/how-to-revive-your-old-computer
Killexams : Linux Fu: WSL Tricks Blur The Windows/Linux Line

We have to admit, we have an odd fascination with WSL — the Windows subsystem for Linux. On the one hand, it gives us more options on Windows 10 for running the software we love. On the other hand, we wonder why we aren’t just running Linux. Sometimes it is because our cool laptop doesn’t work well on Linux. Other times we are using someone else’s computer that we aren’t allowed to reload or dual boot. Still, as long as we have to use Windows, we are glad to have WSL. A recent blog post by [Hanselman] shows some very cool tricks for using WSL that make it even better.

Exploring WSL

Did you know you can use WSL to run Linux commands in a Windows command shell? For example, say you have a long directory listing and you want to run grep:

dir c:\archive\* | wsl grep -i hackaday

Of course, from bash you could access the same directory:

ls /mnt/c/archive | grep -i hackaday

Extensions

Many of the tricks rely on the fact that bash doesn’t assume any executable file extension. If you try to run explorer, for example, from a bash shell, nothing happens. But if you append the .exe extension, Windows programs will run and, by default, the usual Windows directories are in the path.

You do need to watch out for path name conversion. For example, if you provide “.” as an argument to explorer, you will open up a network share like \\wsl$\Ubuntu\home\user_name. Of course, that’s another trick. You can access your WSL directories from Windows using that notation (obviously, Ubuntu and user_name may be different for your installation). However, ordinary paths do not work.

Path Conversion

You can, however, use the wslpath utility to convert paths in both directions:

$ wslpath
Usage:
-a force result to absolute path format
-u translate from a Windows path to a WSL path (default)
-w translate from a WSL path to a Windows path
-m translate from a WSL path to a Windows path, with '/' instead of '\'

For example:

$ explorer.exe `wslpath -w /bin`

X11 and More

[Hanselman] discusses a number of tips, including some about using development tools and git. You can also install multiple WSL flavors and export them to other Windows machines. He also mentions running X11 using paid tools Pengwin and X410. We say just use Swan.

Speaking of Swan, it is a great alternative to WSL on any Windows version, not just Windows 10. In truth, it is just Cygwin with X11 preconfigured, but it is much easier than trying to get X11 running on a bare Cygwin install. On the one hand, this is a much more desktop Linux solution than WSL. On the other hand, WSL loads real distributions and integrates nicely with Windows 10. But if you load both, you can get the advantages of both, too.

Sell Out?

Given the choice, we’ll just use Linux. Honestly, if your workflow is mostly Web-based, it hardly matters anymore. You load Chrome or your choice of browser and everything works. Of course, our Linux boxes tend to be way more efficient and also stay running better than Windows.

However, if you find yourself using Windows, Cygwin has long been a big help. Now WSL is another tool to get your Linux tools on a Microsoft-controlled box.

Source: Al Williams, https://hackaday.com/2019/12/23/linux-fu-wsl-tricks-blur-the-windows-linux-line/
Killexams : Dealing With System-Level Power

Analyzing and managing power at the system level is becoming more difficult and more important—and slow to catch on.

There are several reasons for this. First, design automation tools have lagged behind an understanding of what needs to be done. Second, modeling languages and standards are still in flux, and what exists today is considered inadequate. And third, while system-level power has been a growing concern, particularly at advanced nodes and for an increasing number of mobile devices that are being connected to the Internet, many chipmakers are just now beginning to wrestle with complex power management schemes.

On the tools front, some progress has been made recently.

“It might not be 100% there yet, but the tools are now starting to become available,” said Rob Knoth, product management director for Cadence’s Digital & Signoff Group. “So we’re at a bit of an inflection point where maybe a year or five years from now we’ll look back and see this is about the time when programmers started moving from the ‘Hey, we really need to be doing something about this’ stage into the ‘We are doing something about it’ mode.”

Knoth pointed to technologies such as high-level synthesis, hardware emulation, and more accurate power estimation all being coupled together, combined with the ability to feed data from the software workloads directly all the way through silicon design to PCB design to knit the whole system together.

There has been progress in the high-level synthesis area, as well, in part because engineering teams have new algorithms and they want to be able to find out the power of that algorithm.

“It’s no longer acceptable to just look at an old design and try to figure it out,” said Ellie Burns, product manager of the Calypto Systems Division at Mentor, a Siemens Business. “It doesn’t really work very well anymore. So you have to be able to say, ‘I want to experiment with an algorithm. What power does it have during implementation?’”

This can mean running the design through to implementation as quickly as possible to determine power numbers. “Power is most accurate down at the gate level,” Burns said. “We’re a million miles from that, so what do you do? We’ve also seen some applications of machine learning where you start to learn from the gate-level netlist, etc., and can begin to store that and apply that from emulation.”

All of these techniques and others are becoming important at 10/7nm, where dynamic current density has become problematic, and even at older nodes where systems are required to do more processing at the same or lower power.

“Part of this is optimizing signal integrity,” said Tobias Bjerregaard, CEO of Teklatech. “Part of it is to extend timing. What’s needed is a holistic approach, because you need to understand how power affects everything at the same time. If you’re looking at power integrity and timing, you may need to optimize bulk timing. This is not just a simple fix. You want to take whatever headroom is available and exploit what’s there so that you can make designs easier to work with.”

Bjerregaard said these system issues are present at every process node, but they get worse as the nodes shrink. “Timing, routability and power density issues go up at each new node, and that affects bulk timing and dynamic voltage drop, which makes it harder to close a design and achieve profitability.”

PPA
Design teams have always focused on the power/performance/area triumvirate, but at the system level power remains the biggest unsolved problem. Andy Ladd, CEO of Baum, said virtual platform approaches try to bring performance analysis to the system level, but power is not there yet.

“Power is all back-end loaded down at the gates and transistor level, and something needs to shift left,” Ladd said. “For this we need a faster technology. A lot of the tools today just run a piece of the design, or a segment of a scenario. They can’t run the whole thing. If you really want to optimize power at the system level, you have to include the software or realistic scenarios so the developers know how that device is going to run in a real application. Something needs to change. The technology has got to get faster, and you still have to have that accuracy so that the user is going to have confidence that what they are seeing is good. But it has to change.”

Graham Bell, vice president of marketing at Uniquify, agreed there is a real gap at the system level. “We don’t see solutions that really understand the whole hierarchy from application payloads, all the different power states that each of the units or blocks inside the design, whether they are CPUs or GPUs or other special memory interfaces. All of these things have different power states, but there is no global management of that. So there needs to be some work done in the area of modeling, and there needs to be some work done in the area of standards.”

The IEEE has been actively pushing along these lines for at least the last few years but progress has been slow.

“There have been some initial efforts there but certainly instead of being reactive, which a lot of solutions are today, you really want to have a more proactive approach to power management,” Bell said.

The reactive approach is largely about tweaking gates. “You’re dealing with the 5% to 10% of power,” said Cadence’s Knoth. “You’re not dealing with the 80% you get when you’re dealing at the algorithm level, at the software level, at the system level — and that’s really why power is really the last frontier of PPA. It requires the entire spectrum. You need the accuracy at the silicon and gate level, but yet you need the knowledge and the applications to truly get everything. You can’t just say, ‘Pretend everything is switching at 25%,’ because then you are chasing ghosts.”

Speaking the same language
One of the underlying issues involves modeling languages. There are several different proposals for modeling languages, but languages by themselves are not enough.

“I look at some of those modeling languages that look at scenarios, and they are great, but where do they get their data from?” asked Mentor’s Burns. “That seems to be the problem. We need a way to take that, which is good for the software, but you need to bring in almost gate-level accuracy.”

At the same time, it has to be a path to implementation, Ladd said. “You can’t create models and then throw them away, and then implement something else. That’s not a good path. You’ve got to have an implementation path where you are modeling the power, and that’s going to evolve into what you’re implementing.”

Consistent algorithms could be helpful in this regard, with knobs that help the design team take it from the high level down to the gate level.

“The algorithm itself needs to be consistent,” said Knoth. “Timing optimization, power measurement — if you’re using the same algorithms at the high level as well as at the gate level, that gives the correlation. We’re finally at the point where we’ve got enough horsepower that you can do things like incredibly fast synthesis, incredibly large capacities, run actual software emulation workloads, and then be able to harvest that.”

Still, harvesting those workloads is difficult because the data vectors are gigantic. As a result, running the gate-level netlist for SoC power estimation is not practical. The data must somehow be extracted, because it’s tough enough to get within 15% accuracy at RTL, let alone bringing that all the way back up to the algorithm.

Increasingly at smaller geometries, thermal is also a consideration that cannot be left out of the equation.

Baum’s Ladd noted that once the power is understood, thermal can be understood.

“This is exactly why we’ve all been chasing power so much,” Knoth said. “If you don’t understand the power, thermal is just a fool’s errand. But once you understand the power, then you understand how that’s physically spread out in the die. And then you understand how that’s going to impact the package, the board, and you understand the full, system-level componentry of it. Without the power, you can’t even start getting into that. Otherwise you’re back into just making guesses.”

Fitting the design to the power budget
While power has long been a gating factor in semiconductor design, understanding the impact at the system level has been less clear. This is changing for several reasons:

• Margin is no longer an acceptable solution at advanced nodes, because the extra circuitry can impact total power and performance;
• Systems companies are doing more in-house chip design in complex systems, and
• More IP is being reused in all of those designs, and chipmakers are choosing IP partly on the basis of total system power.

Burns has observed a trend whereby users are saying, “‘This is my power budget, how much performance can I get for that power budget?’ I need to be pretty accurate because I’m trying to squeeze every bit of juice out. This is my limit, so the levels of accuracy at the system level have to be really really high.”

This requires some advanced tooling, but it also may require foundry models because what happens in a particular foundry process may be different than what a tool predicts.

“If an IP vendor can provide power models, just like performance models, that would benefit everybody,” said Ladd. “If I’m creating an SoC and I’m creating all these blocks and I had power models of those, that would be great because then I can analyze it. And when I develop my own piece of IP later, I can develop a power model for that. However, today, so much of the SoC is already made in third party IP. There should be models for that.”

UPF has been touted as the solution to this, but it doesn’t go far enough. Some vendors point to hardware-based emulation as the only way to fully describe the functionality.

“You need the activity all together, throughout the design,” said Burns. “That’s the difficult part. If you had the model on the UPF side, we need that. But then how do we take how many millions of vectors in order to get real system-level activity, and maybe different profiles for the IP that we could deliver?”

Knoth maintained that if the design team is working at a low enough granularity, they are dealing with gates. “UPF for something like an inverter, flip flop or even a ROM is fine, but when you abstract up to an ARM core level or something like that, suddenly you need a much more complex model than what UPF can provide you.”

While the UPF debate is far from over, Bell recognized there really is a gap in terms of being able to do the system-level modeling. “We’re really trying to do a lot of predictive work with the virtual prototyping and hardware emulation, but we’re still a long way away from actually doing the analysis when the system is running, and doing it predictively. We hear, ‘We’ll kind of build the system, and see if all of our prototyping actually plays out correctly when we actually build the systems.’ We’ve played with dynamic voltage and frequency scaling, we do some of the easy things, or big.LITTLE schemes that we see from ARM and other vendors, but we need to do a lot more to bring together the whole power hierarchy from top to bottom so we understand all of the different power contributors and power users in the design.”

Further, he asserted that these problems must be solved as there is more low power IP appearing in the marketplace, such as for DDR memories.

“We’re moving to low power schemes, we’re moving to lower voltage schemes, and what we’re trying to do with a lot of that IP is to reduce the low power footprint. The piece that designers need to struggle with is what happens with their ability to have noise immunity and have reliability in the system. As we push to lower power in the system, we’re reducing voltages, and then we are reducing noise margins. Somehow we have to analyze that and, ideally, in the actual running design somehow predictably adjust the performance of the design to work with actual operating conditions. When you power up with an Intel processor, it actually sets the supply voltage for the processor. It will bump it up and down a certain number of millivolts. That kind of dynamic tuning of designs is also going to have to be a key feature in terms of power use and power management,” he said.

Related Stories
Transient Power Problems Rising
At 10/7nm, power management becomes much more difficult; old tricks don’t work.
Power Challenges At 10nm And Below
Dynamic power density and rising leakage power becoming more problematic at each new node.
Closing The Loop On Power Optimization
Minimizing power consumption for a given amount of work is a complex problem that spans many aspects of the design flow. How close can we get to achieving the optimum?
Toward Real-World Power Analysis
Emulation adds new capabilities that were not possible with simulation.


Source: https://semiengineering.com/dealing-power-system-level/
Killexams : C++20 Is Feature Complete; Here’s What Changes Are Coming

If you have an opinion about C++, chances are you either love it for its extensiveness and versatility, or you hate it for its bloated complexity and would rather stick to alternative languages on both sides of the spectrum. Either way, here’s your chance to form a new opinion about the language. The C++ standard committee has recently gathered to work on finalizing the language standard’s newest revision, C++20, deciding on all the new features that will come to C++’s next major release.

After C++17, this will be the sixth revision of the C++ standard, and the language has come a long way from its “being a superset of C” times. Frankly, when it comes to loving or hating the language, I haven’t fully made up my own mind about it yet. My biggest issue with it is that “programming in C++” can just mean so many different things nowadays, from a trivial “C with classes” style to writing code that will make Perl look like prose. C++ has become such a feature-rich and downright overwhelming language over all these years, and with all the additions coming with C++20, things won’t get easier. Although, they also won’t get harder. Well, at least not necessarily. I guess? Well, it’s complex, but that’s simply the nature of the language.

Anyway, the list of new features is long, combining all the specification proposals is even longer, and each and every one of these additions could fill its own, full-blown article. But to get a rough idea about what’s going to come to C++ next year, let’s have a condensed look at some of these major new features, changes, and additions that will await us in C++20. From better type checking and compiler error messages to Python-like string handling and plans to replace the #include system, there’s a lot at play here!

Making Things Safer

As a language, being more liberal and less restrictive on implementation details provides great flexibility for developers — along with a lot of potential for misunderstandings that are bound to result in bugs somewhere further down the road. It is to this day the biggest asset and weakness of C, and C++ still has enough similarities in its roots to follow along with it. Restrictions can surely help here, but adding restrictions tends to be an unpopular option. The good thing is, C++ has compromises in place that leave the flexibility on the language level, and adds the restrictions at the developer’s own discretion.

Compiler Advisory: Explicit Constants

Back in C++11, the constexpr keyword was introduced as an addition to a regular const declaration, defining a constant expression that can be evaluated at compile time. This opens up plenty of optimization opportunities for the compiler, but also makes it possible to declare that, for example, a function will return a constant value. That helps to more clearly show a function’s intent, avoiding some potential headaches in the future. Take the following example:

int foo() {
    return 123;
}

constexpr int bar() {
    return 123;
}

const int first = foo();
const int second = bar();

While there is technically no difference between these two functions, and either one will return a constant value that will be valid to assign to a const variable, bar() will make this fact explicitly clear. In the case of foo(), it’s really more of a coincidental side effect, and without full context, it is not obvious that the function’s return value is supposed to be a constant. Using constexpr eliminates any doubt here and avoids possible accidental side effects, which will make the code more stable in the long run.

Having already been in place for a while, constexpr has seen a few improvements over the years, and will see some more with C++20, especially in terms of removing previously existing limitations on its usage. Most notably, the new standard allows virtual constexpr functions, developers can use try / catch inside constexpr (provided no exceptions are thrown from within), and it’s possible to change members inside of a union.

On top of that, both std::string and std::vector as well as a bunch of other previously missing places in the standard library will fully utilize constexpr. Oh, and if you want to check if a piece of code is actually executed within a constant evaluation, you will be able to do so using std::is_constant_evaluated() which returns a boolean value accordingly.
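To get a feel for it, here is a small sketch of my own (not from the article) showing how a constexpr function can branch on std::is_constant_evaluated(), taking a simple loop during constant evaluation and deferring to the library at run time:

#include <cmath>
#include <type_traits>

constexpr double power(double base, int exponent) {
    if (std::is_constant_evaluated()) {
        // Compile-time path: a plain loop is valid in a constant expression
        double result = 1.0;
        for (int i = 0; i < exponent; ++i) result *= base;
        return result;
    }
    // Run-time path: this branch is never constant-evaluated, so calling
    // the non-constexpr std::pow() here is fine
    return std::pow(base, exponent);
}

constexpr double at_compile_time = power(2.0, 10); // constant evaluation: the loop runs, giving 1024

int main() {
    double at_run_time = power(2.0, 10); // ordinary call: is_constant_evaluated() is false here
    (void)at_run_time;
}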

Note that constexpr code states that it can be evaluated at compile time and is therefore a valid constant expression, but it doesn’t necessarily have to, nor is it guaranteed that the evaluation will happen at compile time, but could be postponed to run time. This is mainly relevant for compiler optimization though and doesn’t affect the program’s behavior, but also shows that constexpr is primarily an intention marker.

constexpr int foo(int factor) {
    return 123 * factor;
}

const int const_factor = 10;
int non_const_factor = 20;

const int first = foo(const_factor);
const int second = foo(non_const_factor);

Here, first will be evaluated at compile time as all expressions and values involved are constants and as such known at compile time, while second will be evaluated at run time since non_const_factor itself is not a constant. It doesn’t change the fact though that foo() is still going to return a constant value, the compiler just can’t be sure yet which exact value that may be. To make sure the compiler will know the value, C++20 introduces the consteval keyword to declare a function as an immediate function. Declaring foo() as consteval instead of constexpr will now indeed cause an error. In fact, immediate functions are really only known at compile time, and as a consequence this turns the consteval functions into an alternative for macro functions.
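A minimal sketch of the difference, assuming a C++20 compiler (the function name is just an illustration):

consteval int square(int n) {        // immediate function: must be evaluated at compile time
    return n * n;
}

constexpr int fine = square(12);     // OK: 12 is known at compile time

int user_input = 12;                 // not a constant expression
// int broken = square(user_input);  // error: the call cannot produce a constant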

At the other end of the constant expression verification strictness is the new constinit keyword that is mainly telling the compiler that an object will be statically initialized with a constant value. If you are familiar with the static initialization order fiasco, this is an attempt to solve the issue.
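A short hypothetical example of what that buys you: the initializer must be a constant expression, but the variable itself stays mutable at run time.

constexpr int initial_value() { return 42; }

constinit int counter = initial_value();  // guaranteed static (compile-time) initialization
// constinit int bad = rand();            // error: rand() is not a constant expression

int main() {
    ++counter;   // unlike const or constexpr, the value may still change later
}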

But constant expressions aren’t the only C++20 changes aimed at improving compile time validation, and the stability that comes with it.

The Concept Of Concepts

While technically not a completely new thing, Concepts have graduated from being an experimental feature to a full-fledged part of the language standard, allowing the addition of semantic constraints to templates, and ultimately making generic programming a hint more specific.

Somewhat related to type traits, Concepts make sure that data used within a template fulfill a specified set of criteria, and verifies this at the beginning of the compilation process. So as an example, instead of checking that an object is_integral, an object of type Integral is used. As a result, the compiler can provide a short and meaningful error message if the defined requirement of a concept isn’t met, instead of dumping walls of errors and warnings from somewhere deep within the template code itself that won’t make much sense without digging further into that code.

Apart from letting the compiler know what data is needed, it also shows rather clearly to other developers what data is expected, helping to avoid error messages in the first place and avoiding misunderstandings that lead to bugs later on. Going the other direction, Concepts can also be used to constrain the return type of template functions, limiting variables to a Concept rather than a generic auto type, which can be considered as C++’s void * return type.

Some basic Concepts will be provided in the standard library, and if you don’t want to wait for updated compilers, GCC has the experimental Concepts implemented since version 6 and you can enable them with the -fconcepts command line parameter. Note that in the initial draft and current reference documentation, Concept names were defined using CamelCase, but they will be changed to snake_case to preserve consistency with all other standard identifiers.
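To make the idea concrete, here is a hand-rolled sketch using the standard std::integral concept from <concepts> (the function names are made up for illustration):

#include <concepts>

template <std::integral T>           // terse form: T must satisfy std::integral
T twice(T value) {
    return value + value;
}

template <typename T>
    requires std::integral<T>        // equivalent long form with a requires-clause
T thrice(T value) {
    return 3 * value;
}

int main() {
    twice(21);       // fine: int is an integral type
    // twice(2.5);   // rejected with a short "constraints not satisfied" diagnostic
}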

Ranges Are The New Iterators

Ranges are essentially iterators that cover a sequence of values in collections such as lists or vectors, but instead of constantly dragging the beginning and end of the iterator around, ranges just keep them around internally.

Just like Concepts, Ranges have also moved from experimental state to the language standard in C++20, which isn’t much of a coincidence, as Ranges depend on Concepts and use them to improve the old iterator handling by making it possible to add constraints to the handled values, with the same benefits. On top of constraining value types, Ranges introduce Views as a special form of a range, which allows data manipulation or filtering on a range, returning a modified version of the initial range’s data as yet another range. This allows them to be chained together. Say you have a vector of integers and you want to retrieve all even values in their squared form — ranges and views can get you there.
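A sketch of that exact scenario, assuming a compiler that ships the C++20 <ranges> header:

#include <iostream>
#include <ranges>
#include <vector>

int main() {
    std::vector<int> numbers{1, 2, 3, 4, 5, 6};

    // Views are lazy and composable: keep the even values, then square them
    auto even_squares = numbers
        | std::views::filter([](int n) { return n % 2 == 0; })
        | std::views::transform([](int n) { return n * n; });

    for (int n : even_squares)
        std::cout << n << ' ';    // prints: 4 16 36
}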

With all of these changes, the compiler will be of a lot more assistance for type checking and will present more useful error messages.

String Formatting

Speaking of error messages, or, well, output in general: following the proposal of its author, the libfmt library will be integrated into the language standard as std::format. Essentially this provides Python’s string formatting functionality! Compared to the whole clumsiness of the cout shifting business, and the fact that using printf() in the context of C++ just feels somewhat wrong, this is definitely a welcome addition.

While the Python style formatting offers pretty much the same functionality as printf(), just in a different format string syntax, it eliminates a few redundancies and offers some useful additions, such as binary integer representation, and centered output with or without fill characters. However, the biggest advantage is the possibility to define formatting rules for custom types. On the surface this is like Python’s __str__() or Java’s toString() methods, but it also adds custom formatting types along the way.
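For a flavor of the syntax, here is a small sketch of mine (note that the <format> header landed in compilers somewhat later than the rest of C++20, so availability depends on your toolchain):

#include <format>
#include <iostream>

int main() {
    // Python-style replacement fields instead of cout shifting or printf() codes
    std::cout << std::format("{:>12} | {:08.3f} | {:#010b}\n",
                             "hackaday", 3.14159, 42);
    // prints "    hackaday | 0003.142 | 0b00101010"
}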

Take strftime() as an example: although it is a C function that behaves like snprintf(), the difference is that it defines custom, time-specific conversion characters for its format string, and expects a struct tm as argument. With the right implementation, std::format could be extended to behave just like that, which is in fact what the upcoming addition to the std::chrono library is going to do.

Source Location

While we’re on the subject of formatting output in convenient ways, another experimental feature coming to C++20 is the source_location functionality, providing convenient access to the file name, line number, or function name from the current call context. In combination with std::format, it is a prime candidate for implementing a custom logging function, and practically a modern alternative to preprocessor macros like __FILE__ and __LINE__.
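A hedged sketch of such a logging helper (the log() function itself is my own invention; std::source_location::current() and its accessors are the C++20 part):

#include <iostream>
#include <source_location>

// The defaulted argument captures the caller's location, not log()'s own
void log(const char* message,
         std::source_location loc = std::source_location::current()) {
    std::cout << loc.file_name() << ':' << loc.line()
              << " (" << loc.function_name() << ") " << message << '\n';
}

int main() {
    log("something happened");   // prints this file, the calling line, and the enclosing function
}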

Modules

It appears that slowly eliminating use of the preprocessor is a long-term goal in the future of C++, with consteval essentially replacing macro functions, source_location obsoleting one of the most commonly used macros, and on top of all that: modules, a new way to split up source code that aims to eventually replace the whole #include system.

While some say it’s long overdue, others see the addition of modules at this point as rather critical, and some developers have stated their concerns about the current state. Whatever your own opinion is on the subject, it’s safe to say that this is a major change to the whole essence of the language, but at the same time a complex enough undertaking that won’t just happen overnight. Time will tell where modules will actually end up. If you’re curious and want to have a look at it already, both GCC and Clang already have module support to some extent.
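For completeness, here is a minimal sketch of what a module can look like; file names, extensions, and the exact compiler invocation vary between toolchains, so treat this as illustrative rather than definitive:

// math.cppm (module interface unit)
export module math;

export int add(int a, int b) {
    return a + b;
}

// main.cpp (a consumer of the module)
import math;

int main() {
    return add(40, 2);
}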

But wait, there is more!

Everything Else

The list just goes on, with Coroutines as one more major feature that will be added to C++20.

As for all the rest, there is a long list of smaller additions and tweaks, among them the deprecation of certain uses of volatile.

Don’t worry though, the parts that really matter to be volatile won’t change.

So all in all, a lot is coming to C++, and some features are sure worthy to be excited about.

Of course, some of these new features and extensions have been around in other languages for ages, if not even from the beginning. It’s interesting to see though how some of these languages that were once influenced by C++ are now influencing the very future of C++ itself.

Source: Sven Gregori, https://hackaday.com/2019/07/30/c20-is-feature-complete-heres-what-changes-are-coming/
Killexams : How To Better Align B2B Lead Generation And Prospect Development

Business-to-business (B2B) lead generation and prospect development are key skills that can drive superior sales growth. They are closely interconnected, as prospecting is an essential part of lead generation. But this link isn't always obvious to sales and marketing professionals.

The lack of understanding of their interdependence tends to complicate the relationships between sales and marketing team members. Forrester Research states that “if marketing and sales are not aligned and they do not collaborate, they will be disintermediated,” resulting in lost sales or costly delays.

Aligning B2B lead generation and prospect development requires strong cooperation, reliability and trust. Both sales and marketing teams need to work together to develop a set of specific abilities, customer insight, knowledge and practices that help launch campaigns, nurture leads, collect and analyze data, and convert prospects to new customers.

Building An Analytical Foundation 

Big data has become a game-changing factor for sales and marketing in companies across the globe. Data is penetrating all spheres of business, and the success of the entire lead generation process now depends on the competence of the team.

Competence in big data analysis is a prerequisite to work with various data types and disparate data sources to establish an efficient lead generation funnel. A successful lead generation strategy requires a high level of personalization, which in turn involves the processing of massive amounts of data (often from diverse sources).

  • Legacy/Disparate Data Sources 

According to research by Econsultancy and IBM, most companies use only a small portion of data collected for personalization, yet still struggle to unify this data.

Disparate legacy systems and data sources require the accurate analysis, cleaning, comparing, blending and filtering of data before it is ready for migration. Legacy data sources are usually related to data quality issues, and the overlapping of datasets, duplication of data and missing data are frequent occurrences. Therefore, the integration of such data requires well-established, precise steps and complex programming. Patience and scrupulousness are major keys to success when dealing with disparate data.

A key reason legacy systems tend to be a burden for both sales and marketing teams is that they appear to be outdated. Due to the exponential development of digital tools and techniques, a gap often arises between the system adopted earlier and the data gained now. Owning an outdated legacy system can result not only in sales and marketing complications, but in errors, bugs and even unprofitable decisions. Moreover, many challenges can be avoided by modernizing legacy systems, including:

• Complications with system integration and scalability.

• Security threats.

• Slow or poor performance.

• Dependence on disconnected and siloed data and tools.

• Hardware and device dependency.

• Additional costs.

Storing data within in-house data centers seems to be an outdated solution and cumbersome to manage. The present challenge of data management has shifted to the integration of accumulated in-house data with data gained and stored in external sources.

Combining these two categories can prove to be a huge success. Industries face the need to answer complex questions; thus, data integration is a must.

Getting A Big Impact From Big Data 

Building a sustainable and profitable business largely depends on the ability to translate vast amounts of data into actionable insights. These insights are valuable for many directions of enterprise activity, especially for marketing and sales.

The need to integrate data from various sources and apply it to segment a target audience has driven the emergence of marketing data management systems (MDMS). Introducing such a platform allows highly personalized targeting across multiple channels. Moreover, the customer engagement road map also sees improvement through this approach. The secret to successful marketing campaigns is getting a significant impact through big data.

Despite all of the technological advancements and improvements, marketing and sales departments still struggle to align their efforts and optimize results, driving additional sales by converting more of the “right” prospects to customers. Applying a stable and well-tuned B2B prospecting platform can uncover opportunities such as:

• Customer insights application for cross-sales.

• Improvement in a personalized experience.

• Optimization of order processes.

• Better content management (taking into account a diverse range of devices and channels).

• Growth in lead nurturing.

Building Digital Lead Development Ecosystems

A digital ecosystem is centered around a company's website and is fueled by high-quality content. Building unique and highly efficient lead development ecosystems will help you leverage this channel effectively.

Digital lead development ecosystems take responsibility for both lead creation and nurturing. In other words, the ecosystem works on lead development and hands over these leads to professional salespeople, considering the demographic and behavioral data, to close the deal. The interplay between smart marketing automation tools, customer relationship management (CRM), various channels, data and a website creates a flourishing ecosystem for lead development.

Conclusion

With so many analytical tools on the market today, each with its own nuances, consolidating these functionalities into a single platform can help marketing and sales gain full advantage of the B2B lead generation and prospect development potential for accelerating business growth.

Readiness to take risks and apply new approaches and techniques can be highly rewarding. Developing the ability to combine data gained from legacy/disparate sources with in-house data can help companies refine and continually improve sales and marketing return on investment (ROI) and results.

Modernization and improvement should not be regarded as a one-time activity for those attempting to get more sales. To keep up with the market dynamics and rapidly developing technologies, companies should focus on turning their data insights into real profits. The most direct path to achieve this is efficiently utilizing the sales funnel.

In conclusion, important steps to growth include building an analytics foundation, modernizing legacy systems, applying big data in strategy development, adopting micro-market orientation and implementing a consolidated platform. These are ongoing tasks that require continual improvement in order to be prepared for the constantly changing demand.

Source: Daniel Hussem, https://www.forbes.com/sites/forbescommunicationscouncil/2020/05/05/how-to-better-align-b2b-lead-generation-and-prospect-development/
Killexams : Distilling The Essence Of Four DAC Keynotes

Chip design and verification are facing a growing number of challenges. How they will be solved — particularly with the addition of machine learning — is a major question for the EDA industry, and it was a common theme among four keynote speakers at this month’s Design Automation Conference.

DAC has returned as a live event, and this year’s keynotes involved the leaders of a systems company, an EDA vendor, a startup, and a university professor.

Mark Papermaster, CTO and executive vice president for technology and engineering at AMD. (Photo: Semiconductor Engineering/Jesse Allen)

Papermaster began his talk with an observation. “There has never been a more exciting time in technology and computation. We are facing a massive inflection point. The combination of exploding amounts of data and more effective analysis techniques that we see in new AI algorithms means that putting all that data to work has created an insatiable demand for computation. We have relied on Moore’s Law for 30 of my 40 years in the industry. I could count on dramatic improvements every 18 months, with lowering the cost of the devices and gains in density and performance with each process node. But as the industry has moved into these minute lithographies, the complexity of manufacturing has grown tremendously. It is obvious that Moore’s Law has slowed. Costs go up with each node. The number of masks is rising, and while we are getting density gains, we are not getting the same scaling factors that we once did or the same performance improvements. There will be a metamorphosis of how we approach the next generation of devices.”

Papermaster noted that embedded devices are becoming pervasive, and they are getting smarter. The demand for computation, driven by AI, is going up everywhere, which requires new approaches to accelerate improvements. “Experts predict that by 2025, the amount of machine-generated data will exceed the data generated by humans. That drives change in how we think about computation. It makes us think about new ways to put accelerators into devices as chips or chiplets. We must take on the challenges collectively as an industry, and that is the metamorphosis that has me excited. That is how we will overcome the challenges.”

One of the big issues involves reticle limitations, which determine how much can be crammed onto a monolithic piece of silicon. Papermaster said this will lead to more design, and more design automation, and that can only come about through collaboration and partnerships. The solutions will rely on heterogeneity and how to handle complexity. Software needs to be designed along with the hardware in a “shift left” manner. “The 225X gain in transistor count over the past decade means we are now looking at designs with 146 billion transistors, and we have to deploy chiplets.”

Fig. 1: Ecosystems created through partnership. Source: AMD (based on Needham & Co. data)

This is not a new idea, however. “If we look back to the first DAC in 1964, it was created as Society to Help Avoid Redundant Effort (SHARE). That acronym is very prescriptive of what we need right now. We need a shared vision of the problem we are solving,” he said.

Put simply, solving problems the industry now faces cannot be done by any single company, and a lot of the innovation happens at the overlap of partnerships.

Fig. 2: Percentage of gains from scaling. Source: AMD

At 3nm, design technology co-optimization (DTCO) is expected to overtake intrinsic scaling. The trends are a challenge to EDA, the application developers, and to the design community. In order to solve the problems, the solution platform needs to be re-architected, particularly for AI. That brings engines and interconnects together with chiplets, up through software stack layers, to create the platform. Engines are becoming more specific, and domain-specific accelerators are needed for an increasing number of tasks.

Fig. 3: Platform approach to problem solving. Source: AMD

“In the next era of chiplets, we will see multiple combinations of 2D and 3D approaches, and partitioning for performance and power will open up new design possibilities. This will create incredible opportunities for EDA, and you will have to rethink many things from the past. We also have to do this sustainably and think more about power. IT computation is on a trajectory to consume all available energy, and we have to cap it now.”

Papermaster called upon Aart deGeus, chairman and CEO for Synopsys, to talk about the sustainability of computing.

DeGeus focused on the exponential of Moore’s Law, overlaid with the exponential of CO2 emissions. “The fact that these two curves fit almost exactly should be very scary to all of us,” he said. “Our objective is clear. We have to improve performance per watt by 100X this decade. We need breakthroughs in energy generation, distribution, storage, utilization, and optimization. The call to action — he or she who has the brains to understand should have the heart to help. You should have the courage to act. I support this message from our sponsor, planet Earth.”

Papermaster followed up, saying that AMD has a 30X by 2025 power efficiency goal, exceeding the industry goal by 2.5X. He said AMD is on track and currently has achieved a 7X improvement. “If the whole industry was to take on this goal, it would save 51 billion kilowatt-hours of energy over 10 years, $6.2B in energy costs, and drive down CO2 emissions by the equivalent of 600 million tree seedlings.”

Papermaster added that AI is at the point of transformation for the design automation industry. “It touches almost every aspect of our activities today,” he said, pointing out that various technologies such as emulation, digital twins, generative design, and design optimization are use cases that are driving EDA. “We are using AI to help improve the quality of results, to explore the design space and improve productivity.”

He also provided one example where packaging can help. By stacking cache on top of logic, AMD could achieve 66% faster RTL simulation.

Anirudh Devgan, president and CEO of Cadence

Devgan’s presentation was entitled “Computational Software and the Future of Intelligent Electronic Systems.” He defines computational software as computer science plus math, noting that this is the underpinning of EDA.

“EDA has done this for a long time, since the late ’60s to early ’70s,” Devgan said. “Computational software has been applied to semiconductors and is still going strong, but I believe it can be applied to a lot of other things, including electronic systems. The last 10 years has been big in software, especially in social media, but for the next 10 to 20 years, even that software will become more computational.”

There are a lot of generational drivers to semiconductor growth. In the past there were single product categories that went through a boom and then a bust. “The question has always been, ‘Will it continue to be cyclical or become more generational growth?'” said Devgan. “I believe, given the number of applications, that semiconductors will become less cyclical.”

He said that while the cost of design is going up, people forget to include the volume. “The volume of semiconductors has gone up exponentially, so if you normalize the cost of design, has cost really gone up? Semiconductors must deliver better value, and this is happening and that is reflected in the revenues over the past few years. There is also an increase in the amount of data that needs to be analyzed. This changes the computer storage and networking paradigm. While domain-specific computing was talked about in the ’90s, it has become really critical in the last few years. This brings us closer to the consumer and the system companies doing more silicon. The interplay between hardware and software and mechanical is driving the resurgence of system companies, driven by data. 45% of our customers are what we would consider system companies,” he said.

Fig. 4: Data’s growing impact. Source: Cadence

Devgan pointed to three trends. First, system companies are building silicon. Second is the emergence of 3D-IC or chiplet-based design. And third, EDA can provide more automation by utilizing AI. He provided supporting data for each of these trends, and then looked at various application areas and how models apply to them. He agreed with Papermaster that gains no longer come just from scaling and that integration is becoming more important. And he outlined the phases of computational software’s emergence across different generations of EDA.

Fig. 5: Eras of EDA software. Source: Cadence

Perhaps the most important thing that came out of this discussion was that EDA has to start addressing the whole stack, not just the silicon. It must include the system and the package. “The convergence of mechanical and electrical requires different approaches, and traditional algorithms have to be rewritten,” Devgan said. “Thermal is different. Geometries are different. Classical EDA has always been about more productivity. A combination of a physics-based approach and data-driven approach works well, but EDA has only historically focused on a single run. There has been no transfer of knowledge from one run to the next. We need a framework and mathematical approach to optimize multiple runs, and that is where the data-driven approach is useful.”

Optimization is one area where he provided an example, showing how numerical methods can drive an intelligent search of the design space. He said that approach can achieve better results in a shorter time than a person can manage manually.
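
To make the multi-run, data-driven idea concrete, the sketch below (Python; the knob names and the toy cost function stand in for a real, expensive synthesis or place-and-route run and are invented for illustration) keeps a history of earlier runs and refines around the best results instead of treating every run as independent:

    import random

    # Toy stand-in for an EDA tool run: returns a "cost" (e.g., negative slack plus
    # weighted power) for a set of tool knobs. In practice each call is an expensive
    # synthesis or place-and-route run, which is why reusing history matters.
    def run_flow(effort, target_freq_ghz, vt_mix):
        return (effort - 0.7) ** 2 + (target_freq_ghz - 2.1) ** 2 + (vt_mix - 0.4) ** 2

    def intelligent_search(n_random=20, n_refine=30, seed=0):
        rng = random.Random(seed)
        history = []  # knowledge carried across runs instead of being discarded
        # Phase 1: broad random sampling of the design space.
        for _ in range(n_random):
            x = (rng.uniform(0, 1), rng.uniform(1.5, 3.0), rng.uniform(0, 1))
            history.append((run_flow(*x), x))
        # Phase 2: local refinement around the best point seen so far.
        for _ in range(n_refine):
            _, best = min(history)
            x = tuple(v + rng.gauss(0, 0.05) for v in best)
            history.append((run_flow(*x), x))
        return min(history)

    print(intelligent_search())   # (lowest cost found, corresponding knob settings)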

Devgan also addressed sustainability. “This is a big thing for our employees, for our investors, for our customers,” he said. “Semiconductors are essential, but they also consume a lot of power. There is an opportunity for us to reduce that power consumption, and power will become the driving factor in PPA — not just at the chip level, but in the data centers and at the system level. Compared to biological systems, we are orders of magnitude off.”

Steve Teig, CEO of Perceive

After more than three decades of working on machine learning applications, Steve Teig is convinced more can be done. “First, deep learning would be even stronger than it is if we depended less on folklore and anecdotes, and spent a little more time on math and principles,” he said. “Second, I believe efficiency matters. It is not enough to make models that seem to work; we should worry about computational throughput per dollar, per watt, and per other resources.”

Teig observed that deep learning is impressive, and would have been considered witchcraft just 15 years ago. “But we need to recognize these models for the magic tricks they are,” he said. “We keep making bigger, badder models. We have forgotten that the driver of innovation for the last 100 years has been efficiency. That is what drove Moore’s Law, the advance from CISC to RISC, and from CPUs to GPUs. On the software side we have seen advances in computer science and improved algorithms. We are now in an age of anti-efficiency when it comes to deep learning. The carbon footprint to train a big language model just once, which costs about $8M, is more than 5X the carbon footprint of driving your car for its entire lifetime. The planet cannot afford this path.”

Fig. 6: Growing AI/ML model sizes. Source: Perceive

He also said that from a technical point of view, these gigantic models are untrustworthy because they capture noise in the training data, which is especially problematic in medical applications. “Why are they so inefficient and unreliable? The most significant reason is we are relying on folklore.”

He structured the rest of his presentation around the theme of “A Myth, a Misunderstanding, and a Mistake.” The “myth” is that average accuracy is the right thing to optimize. “Some events don’t really matter, and other mistakes are more serious. The neural networks that we have do not distinguish between serious and non-serious errors. They are all scored the same. Average accuracy is almost never what people want. We need to think about penalizing errors based on their severity and not their frequency. Not all data points are equally important. So how do you correct this? The loss function must be based on severity, and the training set should be weighted based on the importance of the data.”
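
As an illustration of that point (my sketch, not Teig's formulation), a severity-weighted loss takes only a few lines; the severity values below are invented:

    import numpy as np

    # Cross-entropy where each example is weighted by the severity of getting it
    # wrong, not just by how often that kind of example appears in the training set.
    def severity_weighted_loss(probs, labels, severity):
        # probs: (N, C) predicted class probabilities; labels: (N,) true classes;
        # severity: (N,) cost of misclassifying each example.
        eps = 1e-12
        nll = -np.log(probs[np.arange(len(labels)), labels] + eps)
        return np.sum(severity * nll) / np.sum(severity)

    probs = np.array([[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]])
    labels = np.array([0, 0, 1])
    severity = np.array([1.0, 10.0, 1.0])   # the second mistake is 10x more serious
    print(severity_weighted_loss(probs, labels, severity))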

The “misunderstanding” is the mistaken belief that neural networks are fully expressive as computing devices. “Many of the assumptions and theorems are very specific and not satisfied by real-life neural networks. It is thought that a feed-forward neural network can approximate any continuous combinational function arbitrarily closely. This depends on having non-polynomial activation functions. If this is true, we need an arbitrary number of bits. More concerning is that the only functions you can build in this type of neural network are combinational, meaning that anything that requires state cannot be represented. There are theorems that state that NNs are Turing-complete, but how can this be true when you have no memory? RNNs are literally finite state machines, but they have very limited memory. They are effectively regular expressions that can only count up to a fixed bound, and that makes them equivalent to grep.”
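
One way to picture the finite-state claim (my illustration, not from the talk): a recurrent cell whose entire hidden state is one bit can recognize a regular language such as "odd number of ones," but nothing that requires unbounded counting or memory:

    # The whole "hidden state" is a single bit, i.e., a 2-state automaton.
    def parity_fsm(bits):
        state = 0
        for x in bits:
            state ^= x          # the transition function of the automaton
        return state            # 1 if an odd number of ones was seen

    print(parity_fsm([1, 0, 1, 1]))   # -> 1
    print(parity_fsm([1, 0, 0, 1]))   # -> 0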

The “mistake” is the belief that compression hurts accuracy and thus that we should not compress our models. “You want to find structure in data, and distinguish structure from noise. And you want to do this with the smallest amount of resources. Random data cannot be compressed because there is no structure. The more structure you have, the more compression you can do. Anything learnable is, in principle, compressible, which means it has structure. Information theory can help us create better networks. Occam’s Razor says the simplest model is best, but what does that mean? Any regularity or structure in the data can be used to compress that data. Better compression reduces the arbitrariness of the choices that the networks make. If the model is too complicated, you are fitting noise.”
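
The structure-versus-noise point is easy to demonstrate with a general-purpose compressor (a quick sketch, nothing specific to neural networks): regular data shrinks dramatically, random bytes barely at all.

    import os, zlib

    structured = bytes(range(256)) * 256     # highly regular data
    noise = os.urandom(256 * 256)            # no structure to exploit

    for name, data in (("structured", structured), ("random", noise)):
        ratio = len(zlib.compress(data, 9)) / len(data)
        print(f"{name}: compressed to {ratio:.1%} of original size")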

Fig. 7: Types of Compression. Source: Perceive

What would perfect compression look like? Teig provided an interesting example. “The best compression has been described by mathematics. It is captured by the shortest computer program that can generate the data. This is called Kolmogorov complexity. Consider Pi. I could send you the digits 31415, etc., but a program to calculate the digits of Pi enables you to generate the trillionth digit without having to send that many bits. We need to move away from trivial forms of compression, like reducing the number of bits per weight. Is 100X compression possible? Yes.”
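
To make the Kolmogorov-complexity point concrete, here is a short, self-contained sketch (using Machin's formula; it is my example, not Teig's) that can generate as many digits of Pi as requested, while the program itself stays a fixed handful of lines:

    def arctan_inv(x, scale):
        # Fixed-point arctan(1/x) from its alternating Taylor series.
        total, k = 0, 1
        power = scale // x               # scale / x**k, updated incrementally
        while power // k:
            term = power // k
            total += term if k % 4 == 1 else -term
            k += 2
            power //= x * x
        return total

    def pi_digits(n):
        # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239).
        scale = 10 ** (n + 10)           # ten guard digits absorb truncation error
        pi = 16 * arctan_inv(5, scale) - 4 * arctan_inv(239, scale)
        return str(pi)[: n + 1]          # "3141592653..." without the decimal point

    print(pi_digits(30))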

Giovanni De Micheli — Professor of EE and CS at EPFL

Giovanni De Micheli started by looking at the hierarchy of cycles in EDA, each linked to the other by some relation. “The cross-breeding of technology and design leads to superior systems,” he said.

After looking at how loops have appeared historically in music, art, and mathematics, he then looked at the interactions between actors — industry, academia, finance, start-ups, and conferences like DAC that provide data exchange. He used all of this to introduce three questions. Will silicon and CMOS be our workhorse forever? Will classical computing be superseded by new paradigms? Will living matter and computers merge?

Silicon and CMOS. De Micheli looked at some of the emerging technologies, from carbon nanotubes to superconducting electronics, to logic in memory and the use of optics to speed up computation in machine learning. “Many of these are paradigm changes, but you have to look at the effort that would be needed to take these technologies and make them into products. You need new models. You need to adapt or create EDA tools and flows. In doing this, you may discover things that enable you to make existing things better.”

Research into nanowires has led to electrostatic doping, which creates new gate topologies. He also looked at tungsten diselenide (WSe2) and showed a possible cell library in which you can very efficiently implement gates such as XOR and the Majority gate. “Let’s go back to look at logic abstraction,” he said. “We have designed digital circuits with NAND and NOR for decades. Why? Because we were brainwashed when we started. In NMOS and CMOS, those were the most convenient gates to implement. But if you look at the Majority operator, you realize it is the key operator to do addition and multiplication. Everything we do today needs those operations. You can build EDA tools based on this that actually perform better in synthesis.”
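
To see why the Majority operator is "the key operator to do addition," consider this small verification (my own illustration): a full adder needs nothing beyond MAJ for the carry and XOR for the sum.

    from itertools import product

    def MAJ(a, b, c):
        return (a & b) | (a & c) | (b & c)

    # Full adder built only from majority and XOR:
    #   carry = MAJ(a, b, cin),  sum = a XOR b XOR cin
    for a, b, cin in product((0, 1), repeat=3):
        s, carry = a ^ b ^ cin, MAJ(a, b, cin)
        assert a + b + cin == 2 * carry + s   # matches binary addition exactly
    print("MAJ/XOR full adder verified on all 8 input combinations")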

Fig. 8: Gate topologies and libraries for different fabrication technology. Source: EPFL

After going through the background for how to use the Majority operator, De Micheli claimed that it could lead to a 15% to 20% delay reduction compared to previous methods. This is an example of his loop, where an alternative technology teaches us something about an existing technology and helps to improve it, as well as being applicable to new technologies, such as superconducting electronics.

“EDA is the technology enabler. It provides a way in which you can evaluate an emerging technology and see what is useful in the virtual laboratory. It creates new abstractions, new methods, and then new algorithms that are beneficial not only for these new emerging technologies, but also for established technologies. And we do know that current EDA tools do not take us to the optimum circuit, and therefore it’s always interesting to find new paths to optimize circuits.”

Computing paradigm. De Micheli then looked at quantum computing and some of the problems it is suited to solving. The loop here requires adding the notions of superposition and entanglement. “This is a paradigm shift and changes the way we conceive of algorithms, how we create languages, debuggers, etc.,” he said. “We have to rethink many of the notions of synthesis. Again, EDA provides technology-independent optimization, which will lead to reversible logic. It is reversible because the physical processes are inherently reversible. And then there is the mapping to a library, where you have to be able to embed constraints. Quantum EDA will enable us to design better quantum computers. Quantum computing is advancing the theory of computation and the class of problems that can be solved in polynomial time. That includes, for example, factoring, and this will impact security.”
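
A small illustration of the reversibility point (my sketch, not De Micheli's): the Toffoli gate, the classic universal gate for reversible logic, is a bijection on its three-bit inputs and is its own inverse.

    from itertools import product

    def toffoli(a, b, c):
        # CCNOT: flip the target bit c only when both control bits a and b are 1.
        return a, b, c ^ (a & b)

    states = list(product((0, 1), repeat=3))
    images = [toffoli(*s) for s in states]
    assert sorted(images) == states                          # bijective => reversible
    assert all(toffoli(*toffoli(*s)) == s for s in states)   # its own inverse
    print("Toffoli is reversible and self-inverse on all 8 basis states")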

Living matter and computers. “The important factor in the loop here is about doing correction and enhancement. We have been doing correction for over 1,000 years with the eyeglass. Progress has been tremendous.” De Micheli discussed many of the technologies that are available today and how they are transforming our lives.

Fig. 9: Creating feedback loops in medical applications. Source: EPFL

“This is leading to new EDA requirements that allow us to co-design sensors and electronics,” he said. “But the ultimate challenge is understanding and mimicking the brain. This requires us being able to decode or interpret the brain signals, to copy neuromorphic systems and learning models. This creates interfacing challenges for controllability, observability, and connection.”

That’s step one. “The next level — the future — is from brain to mind, and basically being able to connect artificial and natural intelligence. Advances in biology and medicine, together with new electronic and interfacing technology, will enable us to design biomedical systems that help us live better,” he said.

Conclusion
Executing a task in the most efficient manner involves a lot of people. It involves carefully designing the algorithms, the platforms on which those algorithms run, and the tools that perform the mappings between software and hardware, and between hardware and silicon or an alternative fabrication technology. When each of these is done in isolation, it can lead to small improvements. But the big improvements come about when the actors work together in partnership. That may also be the only way to stop the continuing damage to our planet.


Wed, 27 Jul 2022 19:07:00 -0500 en-US text/html https://semiengineering.com/distilling-the-essence-of-four-dac-keynotes/
Killexams : Computer Science

A core tenet of the Computer Science program is optimization, which helps us realize how we can best improve the solution to a problem. I had several opportunities in my courses where I could take a step back and enhance certain solutions to the problems in class, or gain a new perspective by talking to professors during their office hours. The program’s structured approach to learning via experimentation and mistakes forces you to examine how you can improve your coding skills. Additionally, the direct line of communication to professionally experienced faculty is also key to training a better generation of developers.

Daniel Campbell '21
Computer Science Major

Thu, 25 Jun 2015 20:17:00 -0500 en text/html https://www.callutheran.edu/academics/majors/computer-science/
Killexams : Interview: Frank Cohen on FastSOA

InfoQ today publishes a one-chapter excerpt from Frank Cohen's book "FastSOA". On this occasion, InfoQ had a chance to talk to Frank Cohen, creator of the FastSOA methodology, about the issues involved in processing XML messages, scalability, using XQuery in the middle tier, and document-object-relational mapping.

InfoQ: Can you briefly explain the ideas behind "FastSOA"?

Frank Cohen: For the past 5-6 years I have been investigating the impact an average Java developer's choice of technology, protocols, and patterns for building services has on the scalability and performance of the resulting application. For example, Java developers today have a choice of 21 different XML parsers! Each one has its own scalability, performance, and developer productivity profile. So a developer's choice on technology makes a big impact at runtime.

I looked at distributed systems that used message oriented middleware to make remote procedure calls. Then I looked at SOAP-based Web Services. And most recently at REST and AJAX. These experiences led me to look at SOA scalability and performance built using application server, enterprise service bus (ESB,) business process execution (BPEL,) and business integration (BI) tools. Across all of these technologies I found a consistent theme: At the intersection of XML and SOA are significant scalability and performance problems.

FastSOA is a test methodology and set of architectural patterns to find and solve scalability and performance problems. The patterns teach Java developers that there are native XML technologies, such as XQuery and native XML persistence engines, that should be considered in addition to Java-only solutions.

InfoQ: What's "Fast" about it? ;-)

FC: First off, let me describe the extent of the problem. Java developers building Web enabled software today have a lot of choices. We've all heard about Service Oriented Architecture (SOA), Web Services, REST, and AJAX techniques. While there are a LOT of different and competing definitions for these, most Java developers I speak to expect that they will be working with objects that message to other objects - locally or on some remote server - using encoded data, and often the encoded data is in XML format.

The nature of these interconnected services we're building means our software needs to handle messages that range from small to large and from simple to complex. Consider the performance penalty of using a SOAP interface and a streaming XML parser (StAX) to handle a simple message schema as the message size grows. A modern and expensive multi-processor server that easily serves 40 to 80 Web pages per second serves as little as 1.5 to 2 XML requests per second.

Scalability Index

Without some sort of remediation, Java software often slows to a crawl when handling XML data because of a mismatch between the XML schema and the XML parser. For instance, we tested one SOAP stack that instantiated 14,385 Java objects to handle a request message of 7,000 bytes containing 200 XML elements.
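
A streaming approach avoids materializing that object graph in the first place. The sketch below uses Python's iterparse purely to illustrate the idea (the measurements in the interview were made with Java SOAP stacks, and the order document here is made up):

    import io
    import xml.etree.ElementTree as ET

    # A made-up order message with a few hundred elements, roughly the shape of
    # the 7,000-byte request described above.
    xml_doc = "<order>" + "".join(
        f"<item><sku>SKU-{i}</sku><qty>{i % 5 + 1}</qty></item>" for i in range(100)
    ) + "</order>"

    total_qty = 0
    # iterparse streams parse events instead of binding every element to an object.
    for event, elem in ET.iterparse(io.StringIO(xml_doc), events=("end",)):
        if elem.tag == "item":
            total_qty += int(elem.findtext("qty"))
            elem.clear()   # discard each item once processed to keep memory flat

    print("total quantity:", total_qty)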

Of course, titling my work SlowSOA didn't sound as good. FastSOA offers a way to solve many of these scalability and performance problems. FastSOA uses native XML technology to provide service acceleration, transformation, and federation services in the mid-tier. For instance, an XQuery engine provides a SOAP interface for a service that decodes the request, transforms the request data into something more useful, and routes the request to a Java object or another service.

InfoQ: One alternative to XML databinding in Java is the use of XML technologies, such as XPath or XQuery. Why muddy the water with XQuery? Why not just use Java technology?

FC: We're all after the same basic goals:

  1. Good scalability and performance in SOA and XML environments.
  2. Rapid development of software code.
  3. Flexible and easy maintenance of software code as the environment and needs change.

In SOA, Web Service, and XML domains I find the usual Java choices don't get me to all three goals.

Chris Richardson explains the Domain Model Pattern in his book POJOs in Action. The Domain Model is a popular pattern to build Web applications and is being used by many developers to build SOA composite applications and data services.

Platform

The Domain Model divides into three portions: A presentation tier, an application tier, and a data tier. The presentation tier uses a Web browser with AJAX and RSS capabilities to create a rich user interface. The browser makes a combination of HTML and XML requests to the application tier. Also at the presentation tier is a SOAP-based Web Service interface to allow a customer system to access functions directly, such as a parts ordering function for a manufacturer's service.

At the application tier, an Enterprise Java Bean (EJB) or plain-old Java object (Pojo) implements the business logic to respond to the request. The EJB uses a model, view, controller (MVC) framework - for instance, Spring MVC, Struts or Tapestry - to respond to the request by generating a response Web page. The MVC framework uses an object/relational (O/R) mapping framework - for instance Hibernate or Spring - to store and retrieve data in a relational database.

I see problem areas that cause scalability and performance problems when using the Domain Model in XML environments:

  • XML-Java mapping requires increasingly more processor time as XML message size and complexity grow.
  • Each request runs the entire service. For instance, a user will often check order status sooner than any status change is realistic. If the system kept track of the most recent response's time-to-live duration, it would not have to run the whole service to return the previously cached response.
  • The vendor application requires the request message to be in XML form. The data the EJB previously processed from XML into Java objects now needs to be transformed back into XML elements as part of the request message. Many Java-to-XML frameworks - for instance, JAXB, XMLBeans, and Xerces - require processor-intensive transformations. Also, I find these frameworks force me to write difficult and needlessly complex code to perform the transformation.
  • The service persists order information in a relational database using an object-relational mapping framework. The framework transforms Java objects into relational rowsets and performs joins among multiple tables. As object complexity and size grow, my research shows many developers need to debug the O/R mapping to improve speed and performance.

In no way am I advocating a move away from your existing Java tools and systems. There is a lot we can do to resolve these problems without throwing anything out. For instance, we could introduce a mid-tier service cache using XQuery and a native XML database to mitigate and accelerate many of the XML domain specific requests.

Architecture

The advantage of using the FastSOA architecture as a mid-tier service cache is its ability to store any general type of data, and its strength in quickly matching services with sets of complex parameters to efficiently determine when a service request can be serviced from the cache. The FastSOA mid-tier service cache architecture accomplishes this by maintaining two databases:

  • Service Database. Holds the cached message payloads. For instance, the service database holds a SOAP message in XML form, an HTML Web page, text from a short message, and binary data from a JPEG or GIF image.
  • Policy Database. Holds units of business logic that look into the service database contents and decide whether to service a request with data from the service database or pass the request through to the application tier. For instance, a policy that receives a SOAP request checks security information in the SOAP header to validate that a user may receive previously cached response data. In another instance, a policy checks the time-to-live value of a stock market price quote to see if it can respond to a request with the quote stored in the service database.

FastSOA uses the XQuery data model to implement policies. The XQuery data model supports any general type of document and any general dynamic parameter used to fetch and construct the document. Used to implement policies, the XQuery engine allows FastSOA to efficiently assess common criteria of the data in the service cache, and the flexibility of XQuery allows user-driven fuzzy pattern matches to efficiently represent the cache.
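
The control flow of such a policy is simple to outline. FastSOA expresses its policies in XQuery against the native XML store; the sketch below shows the same time-to-live logic in Python only for readability, with illustrative keys and values:

    import time

    class MidTierCache:
        """Minimal sketch of a policy-driven service cache keyed on request payloads."""

        def __init__(self):
            self._store = {}   # request key -> (response, expires_at)

        def get_or_call(self, key, ttl_seconds, call_service):
            hit = self._store.get(key)
            if hit and hit[1] > time.time():
                return hit[0]                 # policy: still fresh, serve from cache
            response = call_service()         # otherwise pass through to the service
            self._store[key] = (response, time.time() + ttl_seconds)
            return response

    cache = MidTierCache()
    quote = cache.get_or_call("quote:IBM", ttl_seconds=5,
                              call_service=lambda: {"symbol": "IBM", "price": 141.2})
    print(quote)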

FastSOA uses native XML database technology for the service and policy databases for performance and scalability reasons. Relational database technology delivers satisfactory performance to persist policy and service data in a mid-tier cache provided the XML message schemas being stored are consistent and the message sizes are small.

InfoQ: What kinds of performance advantages does this deliver?

FC: I implemented a scalability test to contrast native XML technology with Java technology for implementing a service that receives SOAP requests.

TPS for Service Interface

The test varies the size of the request message among three levels: 68 KB, 202 KB, and 403 KB. The test measures the roundtrip time to respond to the request at the consumer. The test results are from a server with dual Intel Xeon 3.0 GHz processors running on a gigabit switched Ethernet network. I implemented the code in two ways:

  • FastSOA technique. Uses native XML technology to provide a SOAP service interface. I used a commercial XQuery engine to expose a socket interface that receives the SOAP message, parses its content, and assembles a response SOAP message.
  • Java technique. Uses the SOAP binding proxy interface generator from a popular commercial Java application server. A simple Java object receives the SOAP request from the binding, parses its content using JAXB created bindings, and assembles a response SOAP message using the binding.

The results show a 2 to 2.5 times performance improvement when using the FastSOA technique to expose service interfaces. The FastSOA method is faster because it avoids many of the mappings and transformations that the Java binding approach performs to work with XML data. The greater the complexity and size of the XML data, the greater the performance improvement.

InfoQ: Won't these problems get easier with newer Java tools?

FC: I remember hearing Tim Bray, co-inventor of XML, urging a large group of software developers in 2005 to go out and write whatever XML formats they needed for their applications. Look at all of the different REST- and AJAX-related schemas that exist today. They are all different, and many of them are moving targets over time. Consequently, when working with Java and XML, the average application or service needs to contend with three facts of life:

  1. There's no gatekeeper to the XML schemas. So a message in any schema can arrive at your object at any time.
  2. The messages may be of any size. For instance, some messages will be very short (less than 200 bytes) while others may be giant (greater than 10 MB).
  3. The messages use simple to complex schemas. For instance, one message schema may have very few levels of hierarchy (fewer than 5 children for each element) while another has many levels of hierarchy (more than 30 children).

What's needed is an easy way to consume any size and complexity of XML data and to easily maintain it over time as the XML changes. This kind of changing landscape is what XQuery was created to address.

InfoQ: Is FastSOA only about improving service interface performance?

FC: FastSOA addresses these problems:

  • Solves SOAP binding performance problems by reducing the need for Java objects and increasing the use of native XML environments to provide SOAP bindings.
  • Introduces a mid-tier service cache to provide SOA service acceleration, transformation, and federation.
  • Uses native XML persistence to solve XML, object, and relational incompatibility.

FastSOA Pattern

FastSOA is an architecture that provides a mid-tier service binding, an XQuery processor, and a native XML database. The binding is a native, streams-based XML data processor. The XQuery processor is the actual mid-tier: it parses incoming documents, determines the transaction, communicates with the "local" service to obtain the stored data, serializes the data to XML, and stores the data in a cache while recording a time-to-live duration. While this is an XML-oriented design, XQuery and native XML databases also handle non-XML data, including images, binary files, and attachments. An equally important benefit of the XQuery processor is the ability to define policies that operate on the data at runtime in the mid-tier.

Transformation

FastSOA provides mid-tier transformation between a consumer that requires one schema and a service that only provides responses using a different and incompatible schema. The XQuery in the FastSOA tier transforms the requests and responses between incompatible schema types.

Federation

Lastly, when a service commonly needs to aggregate the responses from multiple services into one response, FastSOA provides service federation. For instance, many content publishers such as the New York Times provide new articles using the Really Simple Syndication (RSS) protocol. FastSOA may federate news analysis articles published on a Web site with late-breaking news stories from several RSS feeds. This can be done in your application, but it is better done in FastSOA because the content (news stories and RSS feeds) usually includes time-to-live values that are ideal for FastSOA's mid-tier caching.
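
A bare-bones federation step might look like the following sketch (placeholder feed URLs; it assumes RSS 2.0 items carrying pubDate and title, and leaves out the caching described above):

    import xml.etree.ElementTree as ET
    from email.utils import parsedate_to_datetime
    from urllib.request import urlopen

    # Hypothetical feeds; any RSS 2.0 feeds would work the same way.
    FEEDS = ["https://example.com/analysis.rss", "https://example.com/breaking.rss"]

    def federated_headlines(feed_urls):
        items = []
        for url in feed_urls:
            with urlopen(url, timeout=10) as resp:
                root = ET.parse(resp).getroot()
            for item in root.iter("item"):
                when = parsedate_to_datetime(item.findtext("pubDate"))
                items.append((when, item.findtext("title", "")))
        # One merged, newest-first response assembled from several upstream services.
        return [title for _, title in sorted(items, reverse=True)]

    # print(federated_headlines(FEEDS))   # uncomment with reachable feed URLs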

InfoQ: Can you elaborate on the problems you see in combining XML with objects and relational databases?

FC: While I recommend using a native XML database for XML persistence, it is possible to be successful using a relational database. Careful attention to the quality and nature of your application's XML is needed. For instance, XML is already widely used to express documents, document formats, interoperability standards, and service orchestrations. There are even arguments put forward in the software development community for representing service governance in XML form and operating on it with XQuery methods. In a world full of XML, we software developers have to ask if it makes sense to use relational persistence engines for XML data. Consider these common questions:

  • How difficult is it to get XML data into a relational database?
  • How difficult is it to get relational data to a service or object that needs XML data?
  • Can my database retrieve the XML data with lossless fidelity to the original?
  • Will my database deliver acceptable performance and scalability for operations on XML data stored in the database?
  • Which database operations (queries, changes, complex joins) are most costly in terms of performance and required resources (CPUs, network, memory, storage)?

Your answers to these questions form the criteria for deciding whether it makes sense to use a relational database. The alternatives to relational engines are native XML persistence engines such as eXist, Mark Logic, IBM DB2 V9, TigerLogic, and others.

InfoQ: What are the core ideas behind the PushToTest methodology, and what is its relation to SOA?

FC: It frequently surprises me how few enterprises, institutions, and organizations have a method to test services for scalability and performance. One Fortune 50 company asked a summer intern they wound up hiring to run a few performance tests, when he had time between other assignments, to identify scalability problems in their SOA application. That was their entire approach to scalability and performance testing.

The business value of running scalability and performance tests comes once a business formalizes a test method that includes the following:

  1. Choose the right set of test cases. For instance, the test of a multiple-interface, high-volume service will be different from the test of a service that handles periodic requests with huge message sizes. The test needs to be oriented toward the end-user goals in using the service and deliver actionable knowledge.
  2. Accurate test runs. Understanding the scalability and performance of a service requires dozens to hundreds of test case runs. Ad-hoc recording of test results is unsatisfactory. Test automation tools are plentiful and often free.
  3. Draw the right conclusions when analyzing the results. Understanding the scalability and performance of a service requires understanding how the throughput, measured as Transactions Per Second (TPS) at the service consumer, changes with increased message size and complexity and with increased concurrent requests.

All of this requires much more than an ad-hoc approach to reach useful and actionable knowledge. So I built and published the PushToTest SOA test methodology to help software architects, developers, and testers. The method is described on the PushToTest.com Web site and I maintain an open-source test automation tool called PushToTest TestMaker to automate and operate SOA tests.
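
As a flavor of what such automation measures, here is a bare-bones sketch (placeholder URL; a real test, as described above, would also vary message size and concurrency across many runs) that drives concurrent requests and reports TPS and mean latency at the consumer:

    import concurrent.futures, time, urllib.request

    URL = "http://localhost:8080/service"   # substitute the service under test
    CONCURRENCY, REQUESTS = 8, 200

    def one_call(_):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=30) as resp:
            resp.read()
        return time.perf_counter() - start

    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = list(pool.map(one_call, range(REQUESTS)))
    elapsed = time.perf_counter() - start

    print(f"TPS: {REQUESTS / elapsed:.1f}")
    print(f"mean latency: {sum(latencies) / len(latencies) * 1000:.1f} ms")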

PushToTest provides Global Services to its customers, using our method and tools to deliver SOA scalability knowledge. Often we are successful in convincing an enterprise or vendor that contracts with PushToTest for primary research to let us publish the research under an open-source license. For example, the SOA Performance kit comes with the encoding style, XML parser, and use cases. The kit is available for free download at http://www.pushtotest.com/Downloads/kits/soakit.html, and older kits are at http://www.pushtotest.com/Downloads/kits.

InfoQ: Thanks a lot for your time.


Frank Cohen is the leading authority for testing and optimizing software developed with Service Oriented Architecture (SOA) and Web Service designs. Frank is CEO and founder of PushToTest and inventor of TestMaker, the open-source SOA test automation tool that helps software developers, QA technicians, and IT managers understand and optimize the scalability, performance, and reliability of their systems. Frank is the author of several books on optimizing information systems (Java Testing and Design, Prentice Hall, 2004, and FastSOA, Morgan Kaufmann, 2006). For the past 25 years he has led some of the software industry's most successful products, including Norton Utilities for the Macintosh, Stacker, and SoftWindows. He began by writing operating systems for microcomputers, helped establish video games as an industry, helped establish the Norton Utilities franchise, led Apple's efforts into middleware and Internet technologies, and was principal architect for the Sun Community Server. He cofounded Inclusion.net (OTC: IINC) and TuneUp.com (now Symantec Web Services). Contact Frank at fcohen@pushtotest.com and http://www.pushtotest.com.

Sun, 05 Jun 2022 15:49:00 -0500 en text/html https://www.infoq.com/articles/fastsoa-cohen/
Killexams : Best B2B Marketing Companies in Austin, TX 2021

Envision Creative is a branding solutions company that employs a team of expert marketers skilled in social media management, content creation, search engine optimization, SEM and web design. 

The firm believes corporate success relies heavily on proper lead generation and maximum brand recognition. This way, their clients can research, create and secure long-term and profitable connections with customers. 

Website design, marketing strategy, advertising, PPC, branding and design and website development are the services they offer. Envision Creative has worked with Hilton Hotels, Dell, Facebook, University of Texas and Malana Hotels.  

Directive Consulting has been serving the SEO and PPC needs of local businesses in Austin for over 6 years. In that time, the B2B marketing company has helped numerous companies speed up ROI and boost branding across multiple channels. 

The firm employs a team of SEO specialists, data analysts, web designers and developers and account managers geared towards improving corporate performance using the latest technologies and platforms available. They implement a team-based approach to cover different industries simultaneously. 

They have worked on projects sanctioned by Allstate, Pelican Products, Samsung and Cisco. Directive Consulting has also collaborated with Xactly, Ultimate Ears and Tencent.  

HMG Creative is a digital agency that works on marketing campaigns using the latest strategies and technologies with the goal of boosting branding and market awareness. Their creative services are divided into four areas which are design, strategy, marketing and development. 

They combine their technical expertise in using multiple platforms with their experience in different industries to ensure campaign success. Innovation and up-to-date market trends are some of the things they are well known for. 

Their most popular services are digital branding, web design, website development and marketing strategy. The firm has partnered with numerous big brands such as Humco, The Marriott, OriGen Biomedical, OnStream Media, Terminix and The Art Institute. 

Mighty Citizen is a mission-driven digital communication and marketing company that has been serving schools, government agencies and non-profit organizations by creating high-impact campaigns that improve audience connection. 

They develop humanized designs in building websites, crafting brands and launching marketing campaigns. Their expertise lies in web design and development, alongside digital marketing campaigns like social media management. 

Their most notable clients are the University of Texas, Texas Society of Association Executives, Disability Rights Texas, Texas Health and Human Services, Meals on Wheels of Central Texas and Austin Community College.  

Somnio is a digital marketing company focused on creating B2B solutions that connect companies with their target audience. This enables their clients to boost conversion while lowering costs. 

Developing engaging, human-centered platforms is their means of helping companies achieve higher revenue and acquire long-term clients. Their approach to digital marketing is intelligence-driven, using automation to reduce issues caused by human error. 

Content creation, video editing, digital storytelling, branding and custom experience are a few of their most popular services. Their clients usually belong to the IT, business services and medical sectors, with AT&T, IBM, Oracle, Cisco and CA Technologies being some of the biggest ones.  

Thu, 17 Sep 2020 10:05:00 -0500 en text/html https://www.ibtimes.com/spotlight/best-b2b-marketing-companies-in-austin-tx