Windows 10 users to get THIS exciting new Windows 11 feature; Check what’s coming

Your Windows 10 computer will get new Windows 11 features. Know what Microsoft has planned for you.

Are you still using Windows 10 instead of the latest Windows 11? Then you will be glad to know that one new feature is set to migrate from Windows 11 to your computer! Microsoft has decided to bring Windows 11's new printing capabilities to its older operating system, Windows 10. Earlier, Microsoft had said that a scoped set of features would be delivered to users who want to stay with Windows 10, and this new feature can be considered a step in that direction, a report from Windows Latest mentioned.

What's new for Windows 10 users

A PIN can now be added to a print job with the new Windows 10 printing interface. A print job that has a PIN attached to it won't print until the PIN is entered on the printer. This is one of the new techniques for preventing duplicate and incorrect prints. According to Microsoft, PIN integration could reduce paper and toner waste. Users will also benefit from greater security and privacy, especially in settings with several printers.

Also read: Windows 11 taskbar now lets you deal with overflowing apps like a PRO!

Microsoft is also integrating support for the Print Support App (PSA) platform into Windows 10 for enterprise customers. This enables businesses to enhance the print experience by adding features and print workflows without installing any new drivers. With the release of Build 19044.1806 (KB5014666), which is accessible in the Release Preview Channel, Windows 11's new printing functionality will be coming to Windows 10. This update also added a new feature that lets you receive important notifications even when Do Not Disturb mode or Focus assist is turned on in Windows 10.

Besides these, Microsoft is planning to launch a new version of Windows 10, 22H2, with several new features and improvements. Earlier, Microsoft had confirmed that it will support Windows 10 until October 2025.

Sat, 06 Aug 2022 08:42:00 -0500
Microsoft Intros New Attack Surface Management, Threat Intel Tools

Microsoft announced two new capabilities to its Defender security tools — threat intelligence and external attack surface management.

With Microsoft Defender Threat Intelligence, security teams will have additional context, insights, and data to find attacker infrastructure and move to investigate and remediate faster, the company said in an announcement. Security teams will have access to real-time data from both Microsoft Defender and Microsoft Sentinel to proactively hunt for threats.

"Microsoft Defender Threat Intelligence maps the internet every day, providing security teams with the necessary information to understand adversaries, and their attack techniques," the company said in its announcement of the new security solutions. "Customers can access a library of raw threat intelligence detailing adversaries by name, correlating their tools, tactics, procedures (TTPs), and can see active updates within the portal as new information is distilled from Microsoft's security signals and experts."

Microsoft's Defender External Attack Surface Management helps defenders find previously invisible and unmanaged resources that can be seen and attacked from the Internet. The system scans the Internet daily to create a catalog of the environment and uncover unmanaged resources that could be potential entry points for an attacker.

"Continuous monitoring, without the need for agents or credentials, prioritizes new vulnerabilities," the company explained in a post on the Microsoft Threat Intelligence blog. "With a complete view of the organization, customers can take recommended steps to mitigate risk by bringing these unknown resources, endpoints, and assets under secure management within their SIEM and XDR tools."


Tue, 02 Aug 2022 05:05:00 -0500
Indexing and keyword ranking techniques revisited: 20 years later

When the acorn that would become the SEO industry started to grow, indexing and ranking at search engines were both based purely on keywords.

The search engine would match keywords in a query against keywords in its index, which in turn mirrored the keywords that appeared on a webpage.

Pages with the highest relevancy score would be ranked in order using one of the three most popular retrieval techniques: 

  • Boolean Model
  • Probabilistic Model
  • Vector Space Model

The vector space model became the most relevant for search engines. 

In this article, I’m going to revisit the basic and somewhat simple explanation of the classic model that I used back in the day (because it is still relevant in the search engine mix). 

Along the way, we’ll dispel a myth or two – such as the notion of “keyword density” of a webpage. Let’s put that one to bed once and for all.

The keyword: One of the most commonly used words in information science; to marketers – a shrouded mystery

“What’s a keyword?”

You have no idea how many times I heard that question when the SEO industry was emerging. And after I’d given a nutshell of an explanation, the follow-up question would be: “So, what are my keywords, Mike?”

Honestly, it was quite difficult trying to explain to marketers that specific keywords used in a query were what triggered corresponding webpages in search engine results.

And yes, that would almost certainly raise another question: “What’s a query, Mike?”

Today, terms like keyword, query, index, ranking and all the rest are commonplace in the digital marketing lexicon. 

However, as an SEO, I believe it’s eminently useful to understand where they’re drawn from and why and how those terms still apply as much now as they did back in the day. 

The science of information retrieval (IR) is a subset under the umbrella term “artificial intelligence.” But IR itself also comprises several subsets, including library and information science.

And that’s our starting point for this second part of my wander down SEO memory lane. (My first, in case you missed it, was: We’ve crawled the web for 32 years: What’s changed?)

This ongoing series of articles is based on what I wrote in a book about SEO 20 years ago, making observations about the state-of-the-art over the years and comparing it to where we are today.

The little old lady in the library

So, having highlighted that there are elements of library science under the Information Retrieval banner, let me relate where they fit into web search. 

Seemingly, librarians are mainly identified as little old ladies. It certainly appeared that way when I interviewed several leading scientists in the emerging new field of “web” Information Retrieval (IR) all those years ago. 

Brian Pinkerton, inventor of WebCrawler; Andrei Broder, Vice President of Technology and Chief Scientist with Alta Vista (the number one search engine before Google); and Craig Silverstein, Director of Technology at Google (and, notably, Google employee number one), all described their work in this new field as trying to get a search engine to emulate “the little old lady in the library.” 

Libraries are based on the concept of the index card – the original purpose of which was to attempt to organize and classify every known animal, plant, and mineral in the world.

Index cards formed the backbone of the entire library system, indexing vast and varied amounts of information. 

Apart from the name of the author, title of the book, subject matter and notable “index terms” (a.k.a., keywords), etc., the index card would also have the location of the book. And therefore, after a while, when you asked “the little old lady librarian” about a particular book, she would intuitively be able to point not just to the section of the library, but probably even to the shelf the book was on, providing a personalized rapid retrieval method.

However, when I explained the similarity of that type of indexing system at search engines as I did all those years back, I had to add a caveat that’s still important to grasp:

“The largest search engines are index based in a similar manner to that of a library. Having stored a large fraction of the web in massive indices, they then need to quickly return relevant documents against a given keyword or phrase. But the variation of web pages, in terms of composition, quality, and content, is even greater than the scale of the raw data itself. The web as a whole has no unifying structure, with an enormous variant in the style of authoring and content far wider and more complex than in traditional collections of text documents. This makes it almost impossible for a search engine to apply strictly conventional techniques used in libraries, database management systems, and information retrieval.”

Inevitably, what then occurred with keywords and the way we write for the web was the emergence of a new field of communication.

As I explained in the book, HTML could be viewed as a new linguistic genre and should be treated as such in future linguistic studies. There’s much more to a Hypertext document than there is to a “flat text” document. And that gives more of an indication to what a particular web page is about when it is being read by humans as well as the text being analyzed, classified, and categorized through text mining and information extraction by search engines.

Sometimes I still hear SEOs referring to search engines “machine reading” web pages, but that term belongs much more to the relatively recent introduction of “structured data” systems.

As I frequently still have to explain, a human reading a web page and search engines text mining and extracting information “about” a page is not the same thing as humans reading a web page and search engines being “fed” structured data.

The best tangible example I’ve found is to make a comparison between a modern HTML web page with inserted “machine readable” structured data and a modern passport. Take a look at the picture page of your passport and you’ll see one main section with your picture and text for humans to read, and a separate section at the bottom of the page which is created specifically for machine reading by swiping or scanning.

Quintessentially, a modern web page is structured kind of like a modern passport. Interestingly, 20 years ago I referenced the man/machine combination with this little factoid:

“In 1747 the French physician and philosopher Julien Offroy de la Mettrie published one of the most seminal works in the history of ideas. He entitled it L’HOMME MACHINE, which is best translated as “man, a machine.” Often, you will hear the phrase ‘of men and machines’ and this is the root idea of artificial intelligence.”

I emphasized the importance of structured data in my previous article and do hope to write something for you that I believe will be hugely helpful in understanding the balance between human reading and machine reading. I simplified it this way back in 2002 to provide a basic rationalization:

  • Data: a representation of facts or ideas in a formalized manner, capable of being communicated or manipulated by some process.
  • Information: the meaning that a human assigns to data by means of the known conventions used in its representation.


  • Data is related to facts and machines.
  • Information is related to meaning and humans.

Let’s talk about the characteristics of text for a minute and then I’ll cover how text can be represented as data in something “somewhat misunderstood” (shall we say) in the SEO industry called the vector space model.

The most important keywords in a search engine index vs. the most popular words

Ever heard of Zipf’s Law?

Named after Harvard Linguistic Professor George Kingsley Zipf, it predicts the phenomenon that, as we write, we use familiar words with high frequency. 

Zipf said his law is based on the main predictor of human behavior: striving to minimize effort. Therefore, Zipf’s law applies to almost any field involving human production.

This means we also have a constrained relationship between rank and frequency in natural language.

Most large collections of text documents have similar statistical characteristics. Knowing about these statistics is helpful because they influence the effectiveness and efficiency of data structures used to index documents. Many retrieval models rely on them.

There are patterns of occurrences in the way we write – we generally look for the easiest, shortest, least involved, quickest method possible. So, the truth is, we just use the same simple words over and over.

As an example, all those years back, I came across some statistics from an experiment where scientists took a 131MB collection (that was big data back then) of 46,500 newspaper articles (19 million term occurrences).

Here is the data for the top 10 words and how many times they were used within this corpus. You’ll get the point pretty quickly, I think:

Word: frequency
the: 1130021
of: 547311
to: 516635
a: 464736
in: 390819
and: 387703
that: 204351
for: 199340
is: 152483
said: 148302 

Remember, all the articles included in the corpus were written by professional journalists. But if you look at the top ten most frequently used words, you could hardly make a single sensible sentence out of them. 
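With the counts above, Zipf's prediction (frequency roughly inversely proportional to rank, so rank times frequency staying within the same order of magnitude) can be eyeballed in a few lines of Python. The corpus figures are the ones quoted; the check itself is just an illustrative sketch:

```python
# Frequencies of the top 10 words in the 46,500-article newspaper corpus
# quoted above.
freqs = {
    "the": 1130021, "of": 547311, "to": 516635, "a": 464736, "in": 390819,
    "and": 387703, "that": 204351, "for": 199340, "is": 152483, "said": 148302,
}

# Zipf's law predicts frequency ~ C / rank, so rank * frequency should
# stay within the same order of magnitude down the ranked list.
for rank, (word, freq) in enumerate(freqs.items(), start=1):
    print(f"{rank:2d}  {word:<4}  rank x freq = {rank * freq:,}")
```

Every product lands between roughly 1.1 million and 2.3 million, which is about as close to Zipf's ideal as messy real-world text gets.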

Because these common words occur so frequently in the English language, search engines will ignore them as “stop words.” If the most popular words we use don’t provide much value to an automated indexing system, which words do? 

As already noted, there has been much work in the field of information retrieval (IR) systems. Statistical approaches have been widely applied because of the poor fit of text to data models based on formal logics (e.g., relational databases).

So rather than requiring users to anticipate the exact words and combinations of words that may appear in documents of interest, statistical IR lets users simply enter a string of words that are likely to appear in a document.

The system then takes into account the frequency of these words in a collection of text, and in individual documents, to determine which words are likely to be the best clues of relevance. A score is computed for each document based on the words it contains and the highest scoring documents are retrieved.

I was fortunate enough to interview a leading researcher in the field of IR while researching the book back in 2001. At that time, Andrei Broder was Chief Scientist with Alta Vista (he is currently a Distinguished Engineer at Google), and we were discussing the topic of “term vectors” when I asked if he could give me a simple explanation of what they are.

He explained to me how, when “weighting” terms for importance in the index, he may note the occurrence of the word “of” millions of times in the corpus. This is a word which is going to get no “weight” at all, he said. But if he sees something like the word “hemoglobin”, which is a much rarer word in the corpus, then this one will get some weight.
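Broder's point maps onto the standard inverse document frequency (idf) calculation. A sketch, using the natural-log formulation (one of many idf variants; the document-frequency numbers for "of" and "hemoglobin" are invented for illustration):

```python
import math

N = 46_500  # articles in the newspaper corpus mentioned earlier

# Hypothetical document frequencies: "of" appears in nearly every
# article, "hemoglobin" in only a handful.
df = {"of": 46_400, "hemoglobin": 12}

# idf = log(N / df): near zero for ubiquitous terms, large for rare ones.
idf = {term: math.log(N / d) for term, d in df.items()}
for term, weight in idf.items():
    print(f"{term}: idf = {weight:.2f}")
```

The ubiquitous "of" ends up with a weight of effectively zero, while the rare "hemoglobin" gets a substantial one, which is exactly the behavior Broder described.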

I want to take a quick step back here before I explain how the index is created, and dispel another myth that has lingered over the years. And that’s the one where many people believe that Google (and other search engines) are actually downloading your web pages and storing them on a hard drive.

Nope, not at all. We already have a place to do that, it’s called the world wide web.

Yes, Google maintains a “cached” snapshot of the page for rapid retrieval. But when that page content changes, the next time the page is crawled the cached version changes as well.

That’s why you can never find copies of your old web pages at Google. For that, your only real resource is the Internet Archive (a.k.a., The Wayback Machine). 

In fact, when your page is crawled it’s basically dismantled. The text is parsed (extracted) from the document.

Each document is given its own identifier along with details of the location (URL) and the “raw data” is forwarded to the indexer module. The words/terms are saved with the associated document ID in which it appeared.

Here’s a very simple example using two Docs and the text they contain that I created 20 years ago.

Recall index construction

After all the documents have been parsed, the inverted file is sorted by terms:

In my example this looks fairly simple at the start of the process, but the postings (as they are known in information retrieval terms) to the index go in one Doc at a time. Again, with millions of Docs, you can imagine the amount of processing power required to turn this into the massive ‘term wise view’ which is simplified above, first by term and then by Doc within each term.
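The original two-Doc illustration isn't reproduced here, but the process it showed (parse each Doc, post term-to-Doc entries one Doc at a time, then sort the inverted file by term) can be sketched as follows. The two Docs' text is invented for the example:

```python
from collections import defaultdict

# Two tiny Docs standing in for the original two-Doc example
# (the text here is invented).
docs = {
    1: "the quick brown fox",
    2: "the lazy brown dog",
}

# Parse each Doc and post (term -> doc ID) entries one Doc at a time.
postings = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        postings[term].add(doc_id)

# Sort the inverted file by term to get the "term-wise view".
for term in sorted(postings):
    print(term, sorted(postings[term]))
```

With two Docs this is trivial; the point is that the same sort-by-term pass over billions of Docs is where the serious processing power goes.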

You’ll note my reference to “millions of Docs” from all those years ago. Of course, we’re into billions (even trillions) these days. In my basic explanation of how the index is created, I continued with this:

Each search engine creates its own custom dictionary (or lexicon as it is – remember that many web pages are not written in English), which has to include every new ‘term’ discovered after a crawl (think about the way that, when using a word processor like Microsoft Word, you frequently get the option to add a word to your own custom dictionary, i.e. something which does not occur in the standard English dictionary).

Once the search engine has its ‘big’ index, some terms will be more important than others. So, each term deserves its own weight (value). A lot of the weighting factor depends on the term itself. Of course, this is fairly straightforward when you think about it: more weight is given to a word with more occurrences, but this weight is then increased by the ‘rarity’ of the term across the whole corpus.

The indexer can also give more ‘weight’ to words which appear in certain places in the Doc. Words which appeared in the title tag <title> are very important. Words which are in <h1> headline tags or those which are in bold <b> on the page may be more relevant. The words which appear in the anchor text of links on HTML pages, or close to them, are certainly viewed as very important. Words that appear in <alt> text tags with images are noted, as well as words which appear in meta tags.

Apart from the original text “Modern Information Retrieval” written by the scientist Gerard Salton (regarded as the father of modern information retrieval) I had a number of other resources back in the day who verified the above. Both Brian Pinkerton and Michael Mauldin (inventors of the search engines WebCrawler and Lycos respectively) gave me details on how “the classic Salton approach” was used. And both made me aware of the limitations.

Not only that, Larry Page and Sergey Brin highlighted the very same in the original paper they wrote at the launch of the Google prototype. I’m coming back to this as it’s important in helping to dispel another myth.

But first, here’s how I explained the “classic Salton approach” back in 2002. Be sure to note the reference to “a term weight pair.”

Once the search engine has created its ‘big index’ the indexer module then measures the ‘term frequency’ (tf) of the word in a Doc to get the ‘term density’ and then measures the ‘inverse document frequency’ (idf) which is a calculation of the frequency of terms in a document; the total number of documents; the number of documents which contain the term. With this further calculation, each Doc can now be viewed as a vector of tf x idf values (binary or numeric values corresponding directly or indirectly to the words of the Doc). What you then have is a term weight pair. You could transpose this as: a document has a weighted list of words; a word has a weighted list of documents (a term weight pair).
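A toy version of that calculation, turning each Doc into its term weight pairs, might look like the following. It uses raw counts for tf and a natural-log idf, which is just one of many variants; the three-Doc corpus is invented:

```python
import math
from collections import Counter

# A three-Doc toy corpus (invented for illustration).
docs = {
    1: "hemoglobin binds oxygen in the blood",
    2: "the blood of the squid is blue",
    3: "oxygen is carried in the blood",
}
N = len(docs)

tf = {d: Counter(text.split()) for d, text in docs.items()}      # term frequency per Doc
df = Counter(term for counts in tf.values() for term in counts)  # document frequency

# Each Doc becomes a weighted list of words: the term weight pairs.
vectors = {
    d: {term: count * math.log(N / df[term]) for term, count in counts.items()}
    for d, counts in tf.items()
}

print(round(vectors[1]["hemoglobin"], 3))  # rare term, in one Doc: high weight
print(vectors[1]["the"])                   # appears in every Doc: weight 0.0
```

Notice the transposition the quote describes: `vectors` gives each Doc a weighted list of words, and inverting it would give each word a weighted list of Docs.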

The Vector Space Model

Now that the Docs are vectors with one component for each term, what has been created is a ‘vector space’ where all the Docs live. But what are the benefits of creating this universe of Docs which all now have this magnitude?

In this way, if Doc ‘d’ (as an example) is a vector then it’s easy to find others like it and also to find vectors near it.

Intuitively, you can then determine that documents, which are close together in vector space, talk about the same things. By doing this a search engine can then create clustering of words or Docs and add various other weighting methods.

However, the main benefit of using term vectors for search engines is that the query engine can regard a query itself as being a very short Doc. In this way, the query becomes a vector in the same vector space and the query engine can measure each Doc’s proximity to it.
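The usual way to measure that proximity is cosine similarity between the query vector and each Doc vector. A minimal sketch, with term weights invented purely for illustration:

```python
import math

def cosine(a: dict, b: dict) -> float:
    """Cosine of the angle between two sparse term-weight vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    norm_a = math.sqrt(sum(w * w for w in a.values()))
    norm_b = math.sqrt(sum(w * w for w in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Pre-weighted toy Doc vectors (the weights are invented).
doc_vectors = {
    "d1": {"hemoglobin": 1.1, "oxygen": 0.4, "blood": 0.4},
    "d2": {"squid": 1.1, "blood": 0.4, "blue": 1.1},
}

# The query itself is just a very short Doc in the same vector space.
query = {"hemoglobin": 1.0, "oxygen": 1.0}

for doc_id, vec in doc_vectors.items():
    print(doc_id, round(cosine(query, vec), 3))
```

Here d1 scores high because it shares the query's weighted terms, while d2 scores zero: it lives in a different neighborhood of the vector space, exactly the clustering intuition described above.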

The Vector Space Model allows the user to query the search engine for “concepts” rather than a pure “lexical” search. As you can see here, even 20 years ago the notion of concepts and topics as opposed to just keywords was very much in play.

OK, let’s tackle this “keyword density” thing. The word “density” does appear in the explanation of how the vector space model works, but only as it applies to the calculation across the entire corpus of documents – not to a single page. Perhaps it’s that reference that made so many SEOs start using keyword density analyzers on single pages.

I’ve also noticed over the years that many SEOs, who do discover the vector space model, tend to try and apply the classic tf x idf term weighting. But that’s much less likely to work, particularly at Google, as founders Larry Page and Sergey Brin stated in their original paper on how Google works – they emphasize the poor quality of results when applying the classic model alone:

“For example, the standard vector space model tries to return the document that most closely approximates the query, given that both query and document are vectors defined by their word occurrence. On the web, this strategy often returns very short documents that are only the query plus a few words.”

There have been many variants to attempt to get around the ‘rigidity’ of the Vector Space Model. And over the years with advances in artificial intelligence and machine learning, there are many variations to the approach which can calculate the weighting of specific words and documents in the index.

You could spend years trying to figure out what formulae any search engine is using, let alone Google (although you can be sure which one they’re not using as I’ve just pointed out). So, bearing this in mind, it should dispel the myth that trying to manipulate the keyword density of web pages when you create them is a somewhat wasted effort.

Solving the abundance problem

The first generation of search engines relied heavily on on-page factors for ranking.

But the problem you have using purely keyword-based ranking techniques (beyond what I just mentioned about Google from day one) is something known as “the abundance problem” which considers the web growing exponentially every day and the exponential growth in documents containing the same keywords.

And that poses the question on this slide which I’ve been using since 2002:

If a music student has a web page about Beethoven’s Fifth Symphony and so does a world-famous orchestra conductor (such as Andre Previn), who would you expect to have the most authoritative page?

You can assume that the orchestra conductor, who has been arranging and playing the piece for many years with many orchestras, would be the most authoritative. But working purely on keyword ranking techniques only, it’s just as likely that the music student could be the number one result.

How do you solve that problem?

Well, the answer is hyperlink analysis (a.k.a., backlinks).

In my next installment, I’ll explain how the word “authority” entered the IR and SEO lexicon. And I’ll also explain the original source of what is now referred to as E-A-T and what it’s actually based on.

Until then – be well, stay safe and remember what joy there is in discussing the inner workings of search engines!

Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.


About The Author

Mike Grehan is an SEO pioneer (online since 1995), author, world-traveler and keynote speaker, Champagne connoisseur and consummate drinking partner to the global digital marketing community. He is former publisher of Search Engine Watch and ClickZ, and producer of the industry’s largest search and social marketing event, SES Conference & Expo. Proud to have been chairman of SEMPO the largest global trade association for search marketers. And equally proud to be SVP of corporate communications, NP Digital. He also is the creator of Search Engine Stuff, a streaming TV show/podcast featuring news and views from industry experts.

Wed, 03 Aug 2022 22:00:00 -0500 Mike Grehan
Tips to prevent RDP and other remote attacks on Microsoft networks

One long-favored way for ransomware to enter your systems is through attacks on Microsoft’s Remote Desktop Protocol (RDP). Years ago, when we used Microsoft’s Terminal Services (from which RDP evolved) for shared remote access inside or outside of an office, attackers would use a tool called TSGrinder. It would first scan a network for Terminal Services traffic on port 3389. Then attackers would use tools to guess the password to gain network access. They would go after administrator accounts first. Even if we changed the administrator account name or moved the Terminal Services protocol to another port, attackers would often sniff the TCP/IP traffic and identify where it was moved to.
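That first reconnaissance step, checking whether anything answers on port 3389, boils down to a TCP connect attempt. Here is a minimal Python sketch of the idea (the function name is mine, and the demo deliberately probes a local listener rather than a real RDP host):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Demo against a local listener rather than a real RDP host.
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))      # the OS picks a free port
    listener.listen(1)
    port = listener.getsockname()[1]

    print(port_open("127.0.0.1", port))  # something is listening
    listener.close()
    print(port_open("127.0.0.1", port))  # the port is closed again
```

Defenders can use the same check against their own perimeter to confirm that 3389 is not exposed where it shouldn't be.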

Attackers still go after our remote access, this time via RDP. With human-operated ransomware techniques, attackers gain access and then use higher privileges to gain more access in a network. You have several ways to protect your network from brute-force or other targeted remote attacks. 

Use administrator accounts with blank passwords

Believe it or not, one way to block such attacks is to have a blank password for the administrator account. Using the Group Policy setting “Accounts: Limit local account use of blank passwords to console logon only” blocks the ability for anyone to remote into the network with a blank password. While this clearly is not an ideal protection, it’s an interesting one that’s been available in Group Policy since Server 2003.

Set Windows 11 lockout policies

Included in the Insider releases of Windows 11, and ultimately coming to Windows 11 22H2, is a new setting that allows a more granular lockout policy than we currently have with Windows 10 or the server platforms. The lockout policy in Windows 10 and Windows 11 appears as follows:

You get three policies: “Account lockout duration”, “Account lockout threshold”, and one to reset the account lockout counter after a set number of minutes.

Windows 11 22H2 will ship with one more policy setting and with the following defaults:

Copyright © 2022 IDG Communications, Inc.

Tue, 02 Aug 2022 21:00:00 -0500
Microsoft: Windows, Adobe Zero-Day Used To Hack Windows Users

The Microsoft Threat Intelligence Center (MSTIC) and the Microsoft Security Response Center (MSRC) on Wednesday claimed that they found an Austrian-based private-sector offensive actor (PSOA) exploiting multiple Windows and Adobe 0-day exploits in “limited and targeted attacks” against European and Central American customers.

For the unversed, PSOAs are private companies that manufacture and sell cyberweapons in hacking-as-a-service packages, often to government agencies around the world, to hack into their targets’ computers, phones, network infrastructure, and other devices.

The Austrian-based PSOA named DSIRF, which Microsoft had dubbed Knotweed, has been linked to the development and attempted sale of a malware toolset called “Subzero”.

DSIRF promotes itself on its website as a company that provides “mission-tailored services in the fields of information research, forensics as well as data-driven intelligence to multinational corporations in the technology, retail, energy, and financial sectors” and has “a set of highly sophisticated techniques in gathering and analyzing information.”

The Redmond giant said the Austria-based DSIRF falls into a group of cyber mercenaries that sell hacking tools or services through a variety of business models. Two common models for this type of actor are access-as-a-service and hack-for-hire.

MSTIC found that the Subzero malware was being circulated on computers through a variety of methods, including 0-day exploits in Windows and Adobe Reader, in the years, 2021 and 2022.

As part of its investigation into the utility of this malware, Microsoft’s communications with a Subzero victim revealed that they had not authorized any red teaming or penetration testing, and confirmed that it was unauthorized, malicious activity.

“Observed victims to date include law firms, banks, and strategic consultancies in countries such as Austria, the United Kingdom, and Panama. It’s important to note that the identification of targets in a country doesn’t necessarily mean that a DSIRF customer resides in the same country, as international targeting is common,” Microsoft wrote in a detailed blog post.

“MSTIC has found multiple links between DSIRF and the exploits and malware used in these attacks. These include command-and-control infrastructure used by the malware directly linking to DSIRF, a DSIRF-associated GitHub account being used in one attack, a code signing certificate issued to DSIRF being used to sign an exploit, and other open-source news reports attributing Subzero to DSIRF.”

In May 2022, Microsoft detected an Adobe Reader remote code execution (RCE) and a 0-day Windows privilege escalation exploit chain being used in an attack that led to the deployment of Subzero.

“The exploits were packaged into a PDF document that was sent to the victim via email. Microsoft was not able to acquire the PDF or Adobe Reader RCE portion of the exploit chain, but the victim’s Adobe Reader version was released in January 2022, meaning that the exploit used was either a 1-day exploit developed between January and May, or a 0-day exploit,” the company explained.

Based on DSIRF’s extensive use of additional zero-days, Microsoft believes that the Adobe Reader RCE was indeed a zero-day exploit. The Windows exploit was analyzed by MSRC, found to be a 0-day exploit, and then patched in July 2022 as CVE-2022-22047 in the Windows Client/Server Runtime Subsystem (csrss.exe).

The Austrian company’s exploits have also been linked to two previous Windows privilege escalation exploits (CVE-2021-31199 and CVE-2021-31201) used in conjunction with an Adobe Reader exploit (CVE-2021-28550), all of which were patched in June 2021.

In 2021, the cyber mercenary group was also linked to the exploitation of a fourth zero-day, a Windows privilege escalation flaw in the Windows Update Medic Service (CVE-2021-36948), which allowed an attacker to force the service to load an arbitrary signed DLL.

To mitigate such attacks, Microsoft has recommended that its customers:

  • Prioritize patching of CVE-2022-22047.
  • Confirm that Microsoft Defender Antivirus is updated to security intelligence update 1.371.503.0 or later to detect the related indicators.
  • Use the included indicators of compromise to investigate whether they exist in your environment and assess for potential intrusion.
  • Change Excel macro security settings to control which macros run and under what circumstances when you open a workbook. Customers can also stop malicious XLM or VBA macros by ensuring runtime macro scanning by Antimalware Scan Interface (AMSI) is on.
  • Enable multifactor authentication (MFA) to mitigate potentially compromised credentials and ensure that MFA is enforced for all remote connectivity.
  • Review all authentication activity for remote access infrastructure, focusing on accounts configured with single-factor authentication, to confirm the authenticity and investigate any abnormal activity.
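The third recommendation, checking for indicators of compromise, can be sketched as a small hash-matching script. This is an illustration only: the function name is ours and the indicator set is hypothetical; a real investigation should use the hashes published in Microsoft's advisory.

```python
import hashlib
from pathlib import Path

def find_ioc_matches(directory, ioc_sha256):
    """Return files under `directory` whose SHA-256 digest appears in
    `ioc_sha256`, a set of lowercase hex digests taken from a published
    indicators-of-compromise list."""
    matches = []
    for path in Path(directory).rglob("*"):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in ioc_sha256:
            matches.append(path)
    return matches
```

In practice an EDR product performs this kind of check continuously; the sketch only shows its shape.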

Besides using technical means to disrupt Knotweed, Microsoft has also submitted written testimony to the House Permanent Select Committee on Intelligence Hearing on “Combatting the Threats to U.S. National Security from the Proliferation of Foreign Commercial Spyware.”

Wed, 27 Jul 2022 21:47:00 -0500 Kavita Iyer en-US text/html
Killexams : Microsoft announces two new Defender products for businesses

Microsoft has announced two new products under the Microsoft Defender lineup. Designed for businesses and powered by its acquisition of cybersecurity company RiskIQ, the new Defender Threat Intelligence and Defender External Attack Surface Management aim to help companies reduce their chances of getting hit by cyberattacks.

We'll begin by getting into the details of Microsoft Defender Threat Intelligence. This new product is all about offering real-time data from Microsoft's security signals. It builds on the real-time detections of Microsoft Sentinel, lets organizations hunt for threats more broadly, and helps them uncover more adversaries. According to Microsoft:

Microsoft Defender Threat Intelligence maps the internet every day, providing security teams with the necessary information to understand adversaries and their attack techniques. Customers can access a library of raw threat intelligence detailing adversaries by name, correlating their tools, tactics, and procedures (TTPs), and can see active updates within the portal as new information is distilled from Microsoft’s security signals and experts. Defender Threat Intelligence lifts the veil on the attacker and threat family behavior and helps security teams find, remove, and block hidden adversary tools within their organization.

Now, for Microsoft Defender External Attack Surface Management. This second product under the Microsoft Defender offering is all about helping businesses see their operations the way an attacker would. More specifically, security teams can use it to discover unmanaged resources that are visible from the internet, which is what an attacker usually sees first. In Microsoft's words:

Microsoft Defender External Attack Surface Management scans the internet and its connections every day. This builds a complete catalog of a customer’s environment, discovering internet-facing resources—even the agentless and unmanaged assets. Continuous monitoring, without the need for agents or credentials, prioritizes new vulnerabilities.
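The core idea, comparing what is discoverable from the outside against what the security team knows it manages, can be reduced to a set difference. A toy sketch (the hostnames are invented, and the real product performs the internet-wide discovery itself):

```python
def unmanaged_assets(discovered, inventory):
    """Return internet-facing hosts that external discovery found but the
    asset inventory does not track: the unmanaged assets an attacker can
    see that the defender cannot."""
    return sorted(set(discovered) - set(inventory))
```

Fed a discovery scan and an inventory list, a forgotten legacy host surfaces immediately as the gap between the two sets.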

Microsoft says these new offerings come at a time when ransomware losses are hitting businesses hard. The company cites an FBI report which found that such losses totaled more than $50 million, with cybercrime overall costing $6.9 billion. Defender Threat Intelligence and Defender External Attack Surface Management should help reduce these threats.


Tue, 02 Aug 2022 02:17:00 -0500 Arif Bacchus
Killexams : Microsoft email users targeted in new phishing campaign that can bypass MFA

Thu, 04 Aug 2022 19:55:00 -0500
Killexams : Microsoft goes all-in on threat intelligence and launches two new products


Today’s threat landscape is an unforgiving place. With 1,862 publicly disclosed data breaches in 2021, security teams are looking for new ways to work smarter, rather than harder.  

With an ever-growing number of vulnerabilities and sophisticated threat vectors, security professionals are slowly turning to threat intelligence to develop insights into Tactics, Techniques and Procedures (TTPs) and exploits they can use to proactively harden their organization’s defenses against cybercriminals. 

In fact, research shows that the number of organizations with dedicated threat intelligence teams has increased from 41.1% in 2019 to 47.0% in 2022. 

Microsoft is one of the key providers capitalizing on this trend. Just over a year ago, it acquired cyberrisk intelligence provider RiskIQ. Today, Microsoft announced the release of two new products: Microsoft Defender Threat Intelligence (MDTI) and Microsoft External Attack Surface Management. 

The former will provide enterprises with access to real-time threat intelligence updated on a daily basis, while the latter scans the internet to discover agentless and unmanaged internet-facing assets to provide a comprehensive view of the attack surface. 

Using threat intelligence to navigate the security landscape  

One of the consequences of living in a data-driven era is that organizations need to rely on third-party apps and services that they have little visibility over. This new attack surface, when combined with the vulnerabilities of the traditional on-site network, is very difficult to manage. 

Threat intelligence helps organizations respond to threats in this environment because it provides a heads-up on the TTPs and exploits that threat actors use to gain entry to enterprise environments.

As Gartner explains, threat intelligence solutions aim “to provide or assist in the curation of information about the identities, motivations, characteristics and methods of threats, commonly referred to as tactics, techniques and procedures (TTPs).” 

Security teams can leverage the insights obtained from threat intelligence to enhance their prevention and detection capabilities, increasing the effectiveness of processes including incident response, threat hunting and vulnerability management. 

“MDTI maps the internet every day, forming a picture of every observed entity or resource and how they are connected. This daily analysis means changes in infrastructure and connections can be visualized,” said CVP of security, compliance, identity and privacy, Vasu Jakkal. 

“Adversaries and their toolkits can effectively be ‘fingerprinted’ and the machines, IPs, domains and techniques used to attack targets can be monitored. MDTI possesses thousands of ‘articles’ detailing these threat groups and how they operate, as well as a wealth of historical data,” Jakkal said. 
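As a rough illustration of how a security team might consume such fingerprint data, the sketch below joins proxy-log connections against a feed mapping known adversary domains to intelligence labels. The feed format and every name in it are invented for illustration, not MDTI's actual API:

```python
def enrich_connections(connections, intel_feed):
    """Annotate outbound connections with threat-intelligence context.

    `connections` is a list of dicts each carrying a 'domain' key (e.g.
    parsed from proxy logs); `intel_feed` maps known-bad domains to an
    actor or article label. Returns only the matching connections, with
    the label attached under 'intel'."""
    hits = []
    for conn in connections:
        label = intel_feed.get(conn["domain"])
        if label is not None:
            hits.append({**conn, "intel": label})
    return hits
```

Anything the feed recognizes becomes an enriched alert; everything else passes through unflagged, which keeps analyst attention on the connections with known-bad context.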

In short, the organization aims to equip security teams with the insights they need to enhance their security strategies and protect their attack surface across the Microsoft product ecosystem against malware and ransomware threats.

Evaluating the threat intelligence market 

The announcement comes as the global threat intelligence market is steadily growing, with researchers expecting an increase from $11.6 billion in 2021 to reach a total of $15.8 billion by 2026. 

One of Microsoft’s main competitors in the space is IBM, with X-Force Exchange, a threat-intelligence sharing platform where security professionals can search or submit files to scan, and gain access to the threat intelligence submitted by other users. IBM recently reported revenue of $16.7 billion. 

Another competitor is Anomali, with ThreatStream, an AI-powered threat intelligence management platform designed to automatically collect and process data across hundreds of threat sources. Anomali most recently raised $40 million in funding as part of a series D funding round in 2018. 

Other competitors in the market include Palo Alto Networks‘ WildFire, the ZeroFOX platform, and Mandiant Advantage Threat Intelligence. 

Given the widespread adoption of Microsoft devices among enterprise users, the launch of a new threat intelligence service has the potential to help security teams defend against the biggest threats to the provider's product ecosystem.


Tue, 02 Aug 2022 08:00:00 -0500 Tim Keary
Killexams : Hackers Opting New Attack Methods After Microsoft Blocked Macros by Default

With Microsoft taking steps to block Excel 4.0 (XLM or XL4) and Visual Basic for Applications (VBA) macros by default across Office apps, malicious actors are responding by refining their tactics, techniques, and procedures (TTPs).

"The use of VBA and XL4 Macros decreased approximately 66% from October 2021 through June 2022," Proofpoint said in a report shared with The Hacker News, calling it "one of the largest email threat landscape shifts in recent history."

In its place, adversaries are increasingly pivoting away from macro-enabled documents to other alternatives, including container files such as ISO and RAR as well as Windows Shortcut (LNK) files in campaigns to distribute malware.

"Threat actors pivoting away from directly distributing macro-based attachments in email represents a significant shift in the threat landscape," Sherrod DeGrippo, vice president of threat research and detection at Proofpoint, said in a statement.

"Threat actors are now adopting new tactics to deliver malware, and the increased use of files such as ISO, LNK, and RAR is expected to continue."

VBA macros embedded in Office documents sent via phishing emails have proven to be an effective technique in that it allows threat actors to automatically run malicious content after tricking a recipient into enabling macros via social engineering tactics.

However, Microsoft's plans to block macros in files downloaded from the internet have led to email-based malware campaigns experimenting with other ways to bypass Mark of the Web (MOTW) protections and infect victims.
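For context, the Mark of the Web is stored on NTFS as a small `Zone.Identifier` alternate data stream in an INI-like format, and files extracted from containers such as ISO images historically arrived without one, which is part of what makes these formats attractive to attackers. A minimal parser for the stream's format (a sketch only; production code should use the Windows APIs rather than read the stream by hand):

```python
from configparser import ConfigParser

URLZONE_INTERNET = 3  # ZoneId=3 means "downloaded from the internet"

def is_marked_from_internet(zone_identifier_text):
    """Parse the text of a Zone.Identifier alternate data stream and
    report whether it carries the internet Mark of the Web. An empty or
    missing stream returns False, which matches the state of a payload
    freshly extracted from an ISO on older Windows builds."""
    if not zone_identifier_text:
        return False
    parser = ConfigParser()
    parser.read_string(zone_identifier_text)
    return parser.getint("ZoneTransfer", "ZoneId", fallback=None) == URLZONE_INTERNET
```

A macro-laden document carrying `ZoneId=3` gets blocked by Microsoft's new default; the same document pulled out of a container with no stream at all historically did not.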

This involves the use of ISO, RAR and LNK file attachments, which have surged nearly 175% during the same period. At least 10 threat actors are said to have begun using LNK files since February 2022.

"The number of campaigns containing LNK files increased 1,675% since October 2021," the enterprise security company noted, adding the number of attacks using HTML attachments more than doubled from October 2021 to June 2022.

Some of the notable malware families distributed through these new methods consist of Emotet, IcedID, Qakbot, and Bumblebee.

"Generally speaking, these other file types are directly attached to an email in the same way we would previously observe a macro-laden document," DeGrippo told The Hacker News in an emailed response.

"There are also cases where the attack chains are more convoluted, for example, with some recent Qbot campaigns where a .ZIP containing an ISO is embedded within an HTML file directly attached to a message."

"As for getting intended victims to open and click, the methods are the same: a wide array of social engineering tactics to get people to open and click. The preventive measures we use for phishing still apply here."

Thu, 28 Jul 2022 18:08:00 -0500 Ravie Lakshmanan
Killexams : Microsoft uncovers group that used previously unknown zero days, spyware to target Windows

The Microsoft Threat Intelligence Center (MSTIC) along with the Microsoft Security Response Center (MSRC) published a blog post identifying and detailing the malware exploits of an Austrian-based group as KNOTWEED.

According to the joint MSTIC and MSRC report, a private-sector offensive actor (PSOA) has been using multiple Windows and Adobe zero-day exploits to develop and sell malware dubbed Subzero, used to attack banks, law firms, and strategic consultancies in European and Central American regions.

In its technical blog post, which is also being used as written testimony submitted to the US House Intelligence Committee this week, Microsoft details the actions of DSIRF, the Austrian company behind the actor it tracks as KNOTWEED.


Despite DSIRF's claims of legitimacy as a multinational risk analysis business that makes use of "a set of highly sophisticated techniques in gathering and analyzing information", Microsoft has tracked and tagged it as a distributor of spyware intended for unauthorized surveillance.

Multiple news reports have linked DSIRF to the malware toolset Subzero which took advantage of Zero-day exploits in Windows and Adobe Reader, in 2021 and 2022.

In May 2022, MSTIC found an Adobe Reader remote code execution (RCE) and a 0-day Windows privilege escalation exploit chain being used in an attack that led to the deployment of Subzero. The exploits were packaged into a PDF document that was sent to the victim via email. Microsoft was not able to acquire the PDF or Adobe Reader RCE portion of the exploit chain, but the victim’s Adobe Reader version was released in January 2022, meaning that the exploit used was either a 1-day exploit developed between January and May, or a 0-day exploit. Based on KNOTWEED’s extensive use of other 0-days, we assess with medium confidence that the Adobe Reader RCE is a 0-day exploit. The Windows exploit was analyzed by MSRC, found to be a 0-day exploit, and then patched in July 2022 as CVE-2022-22047. Interestingly, there were indications in the Windows exploit code that it was also designed to be used from Chromium-based browsers, although we’ve seen no evidence of browser-based attacks.

Microsoft also details KNOTWEED exploits that involve Subzero disguising itself as an Excel file in real estate documents. "The file contained a malicious macro that was obfuscated with large chunks of benign comments from the Kama Sutra, string obfuscation, and use of Excel 4.0 macros."

[Screenshot: Excel prompt to enable macros]

Fortunately, Microsoft has been able to implement protections since identifying KNOTWEED, but it advises users to be on the lookout for other behaviors of known and unknown malware, including examining directories such as C:\Windows\System32\spool\drivers\color\, where spyware may be housed alongside legitimate files.

If digging through directories is too deep in the weeds for some, Microsoft also suggests more practical high-level options, such as prioritizing the patching of CVE-2022-22047, making sure Microsoft Defender Antivirus is up to date, changing Excel macro security settings, enabling multifactor authentication (MFA), and regularly reviewing authentication activity from remote access infrastructure.
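The directory check described above can be turned into a quick triage script. This is a heuristic only: the expected-extension list is our assumption about what normally lives in a color-profile folder, and a hit is a prompt to investigate, not proof of compromise.

```python
from pathlib import Path

# Color-profile folders normally hold ICC/WCS profile files; anything
# else in the spooler color directory deserves a closer look, since
# KNOTWEED-style malware has staged payloads there.
EXPECTED_SUFFIXES = {".icc", ".icm", ".cdmp", ".camp", ".gmmp"}

def suspicious_color_dir_files(directory=r"C:\Windows\System32\spool\drivers\color"):
    """Return files in the given directory whose extensions fall outside
    the expected color-profile set. Heuristic triage, not a verdict."""
    root = Path(directory)
    if not root.is_dir():
        return []
    return sorted(p for p in root.iterdir()
                  if p.is_file() and p.suffix.lower() not in EXPECTED_SUFFIXES)
```

Run on a clean machine the list should come back empty; an unexpected executable or DLL in that folder is worth a hash lookup against published indicators.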


Thu, 28 Jul 2022 02:54:00 -0500 Kareem Anderson