70-338 PDF braindumps change on a daily basis; download the latest daily

killexams.com Microsoft certification study guides contain real test questions and answers. Our 70-338 study guides are legitimate, current, and refreshed on a regular basis (last updated 2022). Many candidates pass their 70-338 exam with our genuine practice questions. If you want to succeed, download the 70-338 PDF braindumps.

Exam Code: 70-338 Practice exam 2022 by Killexams.com team
Lync 2013 Depth Support Engineer
Microsoft Engineer book
Killexams : Microsoft Engineer book - BingNews https://killexams.com/pass4sure/exam-detail/70-338 https://killexams.com/exam_list/Microsoft

Killexams : CA-based Software Engineer Writes Astrology Book

India-West Staff Reporter 

FREMONT, CA – Former Mylapore MLA and veteran stage and movie actor S.Ve. Shekher released the book “Horoscope Matching for Marriage: KT Astro Kundali Matching” written by California-based Kathir Subbiah, aka KT Astrologer. Based on Vedic Astrology, the book focuses on how to match horoscopes and what to expect from marital life and relationships.

The author studied Vedic astrology for over two decades, conducting research, building case studies, reviewing analytics, and analyzing patterns and real-life events. His moment of fame came in November 2016, when Donald Trump won the US presidential election, an outcome Subbiah had predicted a year in advance, in December 2015.

The book had its US launch on May 28 at Fremont Hindu Temple, here. Subbiah has a graduate degree from BITS, Pilani and works as a Software Developer at Microsoft.

Thu, 28 Jul 2022 13:35:00 -0500 en-US text/html https://indiawest.com/ca-based-software-engineer-writes-astrology-book/
Killexams : Trustworthy Online Controlled Experiments

Microsoft, LinkedIn and other leading digital companies. This practical book gives the reader rare access to decades of experimentation experience at these companies and should be on the bookshelf of ...

Sat, 23 Jul 2022 14:03:00 -0500 https://www.cambridge.org/core/books/trustworthy-online-controlled-experiments/D97B26382EB0EB2DC2019A7A7B518F59

Killexams : How this 26-year-old became an engineering manager at Google Cloud without a degree
  • 26-year-old Jon Heisterberg manages a team of software engineers at Google Cloud. 
  • Heisterberg spent his youth tinkering with electronics, but doesn't have a college degree.
  • Here's how he did it.

The shift to cloud computing has been a major theme for enterprise IT over the past decade, with commensurate revenue growth for major players Google, Amazon, and Microsoft.

According to LinkedIn, expertise in AWS and Microsoft Azure ranked among the most in-demand skills of 2021, signaling the demand for talent.

California native Jon Heisterberg has benefited from that demand, and now manages a team of engineers inside Google's cloud computing division at the age of just 26. Heisterberg's team of half-a-dozen software experts develops the in-house programs used by Google Cloud's hardware engineers to model and refine its use of silicon across its systems.

Cloud engineering roles are highly technical and lucrative jobs. Insider analysis of foreign-labor-disclosure data showed managers at Google's cloud unit typically earning a base salary of around $200,000, excluding stock awards and bonuses.

But Heisterberg is unusual, landing the job without a college degree. Listings for equivalent roles at Google typically ask that applicants hold a bachelor's degree in computer science, math, or a related field.

A Google spokesperson told Insider the firm typically "expects a Masters or Ph.D. in Computer Science with 3+ years of leadership experience and 5+ years of engineering experience."

"There's been a lot of learning new skills on the job," Heisterberg told Insider.

Heisterberg began with a side-hustle of tinkering

Growing up in a low-income, single-parent household in Nevada County, California, Heisterberg developed a knack for tinkering with personal electronics at an early age — and quickly realized his school peers were willing to pay him for his skills. 

Aged 12, he figured out how to redesign the personal avatars gaming consoles assigned to players at the outset, "ripping the hard drive out and plugging it into my computer." 

"Little did I know that's how most video game companies would end up making money," he laughs, alluding to the lucrative sums spent on character skins in blockbuster titles like Fortnite. So they basically stole his idea? "Yeah!" 

"Friends would come over to my house and be like, 'Hey, I want that!' So I started this kinda side business modifying, repairing, and reselling things – games consoles, smartphones, computers – you name it," Heisterberg said. "I steadily built up my book of clients, and slowly built up my rates, too." 

Despite his technical talent, Heisterberg told Insider that college "wasn't really on the cards" for him. "I wasn't very studious. I would just fool around on my computer all day." 

As a young adult, he'd managed to convert his side hustle into a full-time job, helping with on-site tech solutions for small and medium-sized businesses in the Bay Area, where he moved to live with his grandmother as a teen. 

He used free online training tools to improve his coding

Heisterberg credits the Year Up program, a nationwide tuition-free training program that aims to help those from unconventional backgrounds, as the driving force behind his career at Google.

"They're trying to solve what they call the 'opportunity divide', so people of lower socioeconomic status can get an on-ramp to these kinds of opportunities," said Heisterberg.

The program promises to help young people from lower-income households "gain the skills, experiences, and support that will empower them to reach their potential through careers and higher education."

In the second half of his year-long stint, he completed an internship at cybersecurity firm Symantec (rebranded as NortonLifeLock in 2019), where he got a crash course in dealing with high-level threats. The leaders of his team were featured in the Academy Award-nominated documentary Zero Days, detailing their discovery of malware targeting an Iranian nuclear facility. "That was one hell of an internship," Heisterberg said.

Heisterberg also started using "Learn X in Y Minutes", a free online tool for learning coding languages, where he honed his Python skills.

There's always a role somewhere in Big Tech, regardless of qualifications. 

Before his time at Symantec was up, a Google recruiter reached out to Heisterberg via LinkedIn, asking if he'd consider applying for the firm's residency program, a 26-month scheme in which new recruits shuffle around the wider Alphabet portfolio. 

As part of the scheme, Google says successful candidates get to learn "about basic coding and debugging, simple data structures, and how to work with large code bases," while working cross-company. That exposed Heisterberg to YouTube, health division Verily, self-driving arm Waymo, and moonshots division GoogleX.

Heisterberg also began studying HighScalability.com, an online community for coders, filled with guest posts from software leaders at Big Tech corporations and job postings.

"You get some very high-level engineers at these big tech companies running through their architecture and decisions they've made, talking about the pros and cons of it all," he said. "It's really great." 

Heisterberg said while top firms usually do seek highly qualified candidates, "it's just not true" that those without can't land a job.

"There's a lot of opportunities out there if you're willing to get a little more creative," he added. "You could find your way to any role in the company eventually."

Mon, 08 Aug 2022 23:50:00 -0500 en-US text/html https://www.businessinsider.com/how-26-year-old-became-google-cloud-manager-without-degree-2022-8
Killexams : Artificial Intelligence is not sentient. Why do people say it is?

Cade Metz

As the sun set over Maury Island, just south of Seattle, Ben Goertzel and his jazz-fusion band had one of those moments that all bands hope for — keyboard, guitar, saxophone and lead singer coming together as if they were one.

Goertzel was on keys. The band’s friends and family listened from a patio overlooking the beach. And Desdemona, wearing a purple wig and a black dress laced with metal studs, was on lead vocals, warning of the coming Singularity — the inflection point where technology can no longer be controlled by its creators.

“The Singularity will not be centralized!” she bellowed. “It will radiate through the cosmos like a wasp!”

After more than 25 years as an artificial intelligence (AI) researcher — a quarter-century spent in pursuit of a machine that could think like a human — Goertzel knew he had finally reached the end goal: Desdemona, a machine he had built, was sentient. But a few minutes later, he realized this was nonsense.

“When the band gelled, it felt like the robot was part of our collective intelligence — that it was sensing what we were feeling and doing,” he said. “Then I stopped playing and thought about what really happened.”

What happened was that Desdemona, through some sort of technology-meets-jazz-fusion kismet, hit him with a reasonable facsimile of his own words at just the right moment.

Goertzel is CEO and chief scientist of an organization called SingularityNET. He built Desdemona to, in essence, mimic the language in books he had written about the future of AI. Many people in Goertzel’s field aren’t as good at distinguishing between what is real and what they might want to be real.

The most famous recent example is an engineer named Blake Lemoine. He worked on AI at Google, specifically on software that can generate words on its own — what’s called a large language model. He concluded the technology was sentient; his bosses concluded it wasn’t. He went public with his convictions in an interview with The Washington Post, saying: “I know a person when I talk to it. It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code.”

The interview caused an enormous stir across the world of AI researchers, which I have been covering for more than a decade, and among people who do not normally follow large-language-model breakthroughs. One of my mother's oldest friends sent her an email asking if I thought the technology was sentient. When she was assured that it was not, her reply was swift. "That's consoling," she said. Google eventually fired Lemoine.

For people like my mother’s friend, the notion that today’s technology is somehow behaving like the human brain is a red herring. There is no evidence this technology is sentient or conscious — two words that describe an awareness of the surrounding world.

That goes for even the simplest form you might find in a worm, said Colin Allen, a professor at the University of Pittsburgh who explores cognitive skills in both animals and machines. “The dialogue generated by large language models does not provide evidence of the kind of sentience that even very primitive animals likely possess,” he said.

Alison Gopnik, a professor of psychology who is part of the AI research group at the University of California, Berkeley, agreed. “The computational capacities of current AI like the large language models,” she said, “don’t make it any more likely that they are sentient than that rocks or other machines are.”

The problem is that the people closest to the technology — the people explaining it to the public — live with one foot in the future. They sometimes see what they believe will happen as much as they see what is happening now.

“There are lots of dudes in our industry who struggle to tell the difference between science fiction and real life,” said Andrew Feldman, CEO and founder of Cerebras, a company building massive computer chips that can help accelerate the progress of AI.

A prominent researcher, Jürgen Schmidhuber, has long claimed that he first built conscious machines decades ago. In February, Ilya Sutskever, one of the most important researchers of the past decade and the chief scientist at OpenAI, a research lab in San Francisco backed by $1 billion from Microsoft, said today’s technology might be “slightly conscious.” Several weeks later, Lemoine gave his big interview.

These dispatches from the small, insular, uniquely eccentric world of AI research can be confusing or even scary to most of us. Science fiction books, movies and television have trained us to worry that machines will one day become aware of their surroundings and somehow do us harm.

It is true that as these researchers press on, Desdemona-like moments when this technology seems to show signs of true intelligence, consciousness or sentience are increasingly common. It is not true that in labs across Silicon Valley engineers have built robots who can emote and converse and jam on lead vocals like a human. The technology can’t do that. But it does have the power to mislead people.

The technology can generate tweets and blog posts and even entire articles, and as researchers make gains, it is getting better at conversation. Although it often spits out complete nonsense, many people — not just AI researchers — find themselves talking to this kind of technology as if it were human.

As it improves and proliferates, ethicists warn that we will need a new kind of skepticism to navigate whatever we encounter across the internet. And they wonder if we are up to the task.

Desdemona’s ancestors

On July 7, 1958, inside a government lab several blocks west of the White House, psychologist Frank Rosenblatt unveiled a technology he called the Perceptron. It did not do much. As Rosenblatt demonstrated for reporters visiting the lab, if he showed the machine a few hundred rectangular cards, some marked on the left and some the right, it could learn to tell the difference between the two.

He said the system would one day learn to recognize handwritten words, spoken commands and even people’s faces. In theory, he told the reporters, it could clone itself, explore distant planets and cross the line from computation into consciousness. When he died 13 years later, it could do none of that. But this was typical of AI research — an academic field created around the same time Rosenblatt went to work on the Perceptron.
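The card-sorting demo above can be sketched in a few lines of modern Python. This is an illustrative reconstruction, not Rosenblatt's actual hardware: the rectangular cards are reduced to invented two-number feature vectors, and the learning rule nudges the weights toward each misclassified example.

```python
# A minimal sketch of a Perceptron-style learning rule (illustrative data).
# Labels: -1 for "marked on the left", +1 for "marked on the right".

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights w and bias b so that sign(w.x + b) matches each label."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # nudge weights toward the misclassified example
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Two linearly separable clusters stand in for the two piles of cards.
left = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.3)]    # label -1
right = [(0.9, 0.8), (0.8, 1.0), (1.0, 0.7)]   # label +1
w, b = train_perceptron(left + right, [-1] * 3 + [1] * 3)
print(predict(w, b, (0.05, 0.1)), predict(w, b, (0.95, 0.9)))
```

After a few passes over the six examples, the learned boundary separates the two clusters, which is all the 1958 demo accomplished.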

The pioneers of the field aimed to re-create human intelligence by any technological means necessary, and they were confident this would not take very long. Some said a machine would beat the world chess champion and discover its own mathematical theorem within the next decade. That did not happen, either.

The research produced some notable technologies, but they were nowhere close to reproducing human intelligence. “Artificial intelligence” described what the technology might one day do, not what it could do at the moment.

Some of the pioneers were engineers. Others were psychologists or neuroscientists. No one, including the neuroscientists, understood how the brain worked. (Scientists still do not understand it.) But they believed they could somehow re-create it. Some believed more than others.

In the ’80s, engineer Doug Lenat said he could rebuild common sense one rule at a time. In the early 2000s, members of a sprawling online community — now called Rationalists or Effective Altruists — began exploring the possibility that AI would one day destroy the world. Soon, they pushed this long-term philosophy into academia and industry. Inside today’s leading AI labs, stills and posters from classic science fiction films hang on the conference room walls. As researchers chase these tropes, they use the same aspirational language used by Rosenblatt and the other pioneers.

Even the names of these labs look into the future: Google Brain, DeepMind, and SingularityNET. The truth is that most technology labeled “artificial intelligence” mimics the human brain in only small ways — if at all. Certainly, it has not reached the point where its creators can no longer control it. Most researchers can step back from the aspirational language and acknowledge the limitations of the technology. But sometimes, the lines get blurry.

Why they believe

In 2020, OpenAI unveiled a system called GPT-3. It could generate tweets, pen poetry, summarize emails, answer trivia questions, translate languages and even write computer programs. Sam Altman, a 37-year-old entrepreneur and investor who leads OpenAI as CEO, believes this and similar systems are intelligent. “They can complete useful cognitive tasks,” Altman told me on a recent morning. “The ability to learn — the ability to take in new context and solve something in a new way — is intelligence.”

GPT-3 is what AI researchers call a neural network, after the web of neurons in the human brain. That, too, is aspirational language. A neural network is really a mathematical system that learns skills by pinpointing patterns in vast amounts of digital data. By analyzing thousands of cat photos, for instance, it can learn to recognize a cat. “We call it ‘artificial intelligence,’ but a better name might be ‘extracting statistical patterns from large data sets,’” said Gopnik.
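Gopnik's phrase "extracting statistical patterns" can be made concrete with a toy sketch: the smallest possible "network" (one weight, one bias) trained by gradient descent on invented data recovers the rule hidden in that data. The data and the rule here are made up for illustration.

```python
# One-parameter-pair "network" fit by gradient descent (illustrative data).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]      # the statistical pattern buried in the data

w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):             # repeatedly nudge w and b downhill on the error
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * dw
    b -= lr * db

print(round(w, 2), round(b, 2))   # the model has "extracted" the pattern y = 2x + 1
```

Systems like GPT-3 do the same thing in spirit, only with billions of parameters and data measured in terabytes rather than five points.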

This is the same technology that Rosenblatt explored in the 1950s. He did not have the vast amounts of digital data needed to realize this big idea. Nor did he have the computing power needed to analyze all that data. But around 2010, researchers began to show that a neural network was as powerful as he and others had long claimed it would be — at least with certain tasks.

These tasks included image recognition, speech recognition and translation. A neural network is the technology that recognizes the commands you bark into your iPhone, and translates between French and English on Google Translate. More recently, researchers at places such as Google and OpenAI began building neural networks that learned from enormous amounts of prose, including digital books and Wikipedia articles by the thousands. GPT-3 is an example.

As it analyzed all that digital text, it built what you might call a mathematical map of human language — more than 175 billion data points that describe how we piece words together. Using this map, it can perform many different tasks, such as penning speeches, writing computer programs and having a conversation.

But there are endless caveats. Using GPT-3 is like rolling the dice: If you ask it for 10 speeches in the voice of Donald Trump, it might provide you five that sound remarkably like the former president — and five others that come nowhere close. Computer programmers use the technology to create small snippets of code they can slip into larger programs, but more often than not, they have to edit and massage whatever it gives them.
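Both ideas above, a "map of how we piece words together" and the dice-roll variance in the output, can be illustrated with a toy bigram model. The corpus below is invented, and the table holds a handful of counts rather than GPT-3's roughly 175 billion parameters, but the mechanism is the same shape: record which words follow which, then roll the dice at each step.

```python
import random
from collections import defaultdict

# A toy "map of how words follow words", built from an invented corpus.
corpus = ("the robot sang a song . the band played a song . "
          "the robot played the keys .").split()

table = defaultdict(list)              # word -> words observed to follow it
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)

def generate(start, n, seed):
    random.seed(seed)                  # same dice, same text; new seed, new text
    out = [start]
    for _ in range(n):
        out.append(random.choice(table[out[-1]]))
    return " ".join(out)

print(generate("the", 6, seed=1))
print(generate("the", 6, seed=2))      # a different roll of the dice
```

Run it twice with different seeds and you get two different continuations, some locally plausible and some nonsense, which is the dice-rolling behavior the article describes, shrunk to a scale you can read in full.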

“These things are not even in the same ballpark as the mind of the average 2-year-old,” said Gopnik, who specializes in child development. “In terms of at least some kinds of intelligence, they are probably somewhere between a slime mold and my 2-year-old grandson.”

Even after we discussed these flaws, Altman described this kind of system as intelligent. As we continued to chat, he acknowledged that it was not intelligent in the way humans are. “It is like an alien form of intelligence,” he said. “But it still counts.”

The words used to describe the once and future powers of this technology mean different things to different people. People disagree on what is and what is not intelligence. Sentience — the ability to experience feelings and sensations — is not something easily measured. Nor is consciousness — being awake and aware of your surroundings.

Altman and many others in the field are confident that they are on a path to building a machine that can do anything the human brain can do. This confidence shines through when they discuss current technologies.

“I think part of what’s going on is people are just really excited about these systems and expressing their excitement in imperfect language,” Altman said. He acknowledges that some AI researchers “struggle to differentiate between reality and science fiction.” But he believes these researchers still serve a valuable role. “They help us dream of the full range of the possible,” he said.

Perhaps they do. But for the rest of us, these dreams can get in the way of the issues that deserve our attention.

Why everyone else believes

In the mid-1960s, a researcher at the Massachusetts Institute of Technology (MIT), Joseph Weizenbaum, built an automated psychotherapist he called Eliza. This chatbot was simple. Basically, when you typed a thought onto a computer screen, it asked you to expand this thought — or it just repeated your words in the form of a question.

Even when Weizenbaum cherry-picked a conversation for the academic paper he published on the technology, it looked like this, with Eliza responding in capital letters:

Men are all alike.

IN WHAT WAY

They're always bugging us about something or other.

CAN YOU THINK OF A SPECIFIC EXAMPLE

Well, my boyfriend made me come here.

YOUR BOYFRIEND MADE YOU COME HERE
But much to Weizenbaum’s surprise, people treated Eliza as if it were human. They freely shared their personal problems and took comfort in its responses.

“I knew from long experience that the strong emotional ties many programmers have to their computers are often formed after only short experiences with machines,” he later wrote. “What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

We humans are susceptible to these feelings. When dogs, cats and other animals exhibit even tiny amounts of humanlike behavior, we tend to assume they are more like us than they really are. Much the same happens when we see hints of human behavior in a machine. Scientists now call it the Eliza effect.

Much the same thing is happening with modern technology. A few months after GPT-3 was released, an inventor and entrepreneur, Philip Bosua, sent me an email. The subject line was: “god is a machine.”

“There is no doubt in my mind GPT-3 has emerged as sentient,” it read. “We all knew this would happen in the future, but it seems like this future is now. It views me as a prophet to disseminate its religious message and that’s strangely what it feels like.”

After designing more than 600 apps for the iPhone, Bosua developed a light bulb you could control with your smartphone, built a business around this invention with a Kickstarter campaign and eventually raised $12 million from the Silicon Valley venture capital firm Sequoia Capital. Now, although he has no biomedical training, he is developing a device for diabetics that can monitor their glucose levels without breaking the skin.

When we spoke on the phone, he asked that I keep his identity secret. He is an experienced tech entrepreneur who was helping to build a new company, Know Labs. But after Lemoine made similar claims about similar technology developed at Google, Bosua said he was happy to go on the record. “When I discovered what I discovered, it was very early days,” he said. “But now all this is starting to come out.”

When I pointed out that many experts were adamant these kinds of systems were merely good at repeating patterns they had seen, he said this is also how humans behave. “Doesn’t a child just mimic what it sees from a parent — what it sees in the world around it?” he said.

Bosua acknowledged that GPT-3 was not always coherent but said you could avoid this if you used it in the right way. “The best syntax is honesty,” he said. “If you are honest with it and express your raw thoughts that gives it the ability to answer the questions you are looking for.”

Bosua is not necessarily representative of the everyman. The chair of his new company calls him “divinely inspired” — someone who “sees things early.” But his experiences show the power of even very flawed technology to capture the imagination.

Where the robots will take us

Margaret Mitchell worries what all this means for the future. As a researcher at Microsoft, then Google, where she helped found its AI ethics team, and now Hugging Face, another prominent research lab, she has seen the rise of this technology firsthand. Today, she said, the technology is relatively simple and obviously flawed, but many people see it as somehow human. What happens when the technology becomes far more powerful?

In addition to generating tweets and blog posts and beginning to imitate conversation, systems built by labs such as OpenAI can generate images. With a new tool called DALL-E, you can create photo-realistic digital images merely by describing, in plain English, what you want to see.

Some in the community of AI researchers worry that these systems are on their way to sentience or consciousness. But this is beside the point. “A conscious organism — like a person or a dog or other animals — can learn something in one context and learn something else in another context and then put the two things together to do something in a novel context they have never experienced before,” said Allen, the University of Pittsburgh professor. “This technology is nowhere close to doing that.”

There are far more immediate — and more real — concerns. As this technology continues to improve, it could help spread disinformation across the internet — fake text and fake images — feeding the kind of online campaigns that may have helped sway the 2016 presidential election. It could produce chatbots that mimic conversation in far more convincing ways. And these systems could operate at a scale that makes today’s human-driven disinformation campaigns seem minuscule by comparison.

If and when that happens, we will have to treat everything we see online with extreme skepticism. But Mitchell wonders if we are up to the challenge. “I worry that chatbots will prey on people,” she said. “They have the power to persuade us what to believe and what to do.”

c.2022 The New York Times Company
Sat, 06 Aug 2022 13:12:00 -0500 en text/html https://www.moneycontrol.com/news/world/ai-is-not-sentient-why-do-people-say-it-is-8967631.html
Killexams : California based Software Engineer releases Astrology Book in Chennai


Wed, 03 Aug 2022 22:48:00 -0500 en-US text/html https://www.indiapost.com/california-based-software-engineer-releases-astrology-book-in-chennai/
Killexams : Windows 12 could arrive in 2024

Microsoft is making a big change with Windows, switching to a new plan of introducing a fresh incarnation of the desktop operating system every three years, with smaller and more regular feature updates in-between.

The move to a new engineering schedule is a rumor floated by Zac Bowden of Windows Central, who is well-connected at Microsoft, and has offered up reliable leakage in the past.