100% free Pass4sure 9L0-619 exam prep and 100% valid test questions.

Memorize our Mac OS X Deployment v10.5 PDF download and success on the 9L0-619 test is assured. You can pass the test with high marks, or your money back. We have fully tested and verified valid 9L0-619 exam questions from the actual test, which have prepared candidates to pass the 9L0-619 test on the first attempt. Simply download our VCE Exam Simulator and practice; you will pass the 9L0-619 test.

Exam Code: 9L0-619 Practice exam 2022 by Killexams.com team
Mac OS X Deployment v10.5
Apple Deployment questions
Killexams : Apple Deployment questions - BingNews
https://killexams.com/pass4sure/exam-detail/9L0-619

Killexams : Yes, you can outsource hybrid workplace disruption
The Apple-in-the-enterprise space has begun to create opportunities for companies willing to look for smart answers to emerging pain points around the future of work.
Thu, 06 Oct 2022 00:17:00 -0500 en text/html https://www.computerworld.com/

Killexams : Mosyle aims to improve enterprise Mac deployment for new users
Mosyle Embark is designed to keep onboarding employees informed as Macs are configured for use. Company CEO Alcyr Araujo sees Apple adoption 'accelerating.' Mosyle is the latest Apple-in-the ...
Wed, 05 Oct 2022 10:45:00 -0500 en text/html https://www.computerworld.com/

Killexams : Apple Vs. Microsoft Vs. Treasury Bonds: The Battle Of Safe Havens Round-3
[Image: Padlock on hundred dollar bill (Aslan Alphan)]


Since my last update on the "Battle of Safe Havens" on 25th August 2022, Apple (NASDAQ:AAPL), Microsoft (NASDAQ:MSFT), and Treasury bonds have experienced a swift decline in price due to rapidly rising interest rates and a worsening macroeconomic environment.

Here's our past coverage of this intriguing battle:

[Chart: Apple and Microsoft performance]

[Chart: Treasury rates]


In my previous note, I highlighted how slowing revenue growth and contracting margins at BigTech companies were making their valuations untenable in a rising interest rate environment. With the risk-free treasury rate (of 3.5-4%) higher than the free cash flow yield offered by so-called safe haven stocks like Apple and Microsoft, there is a lack of equity risk premium. This is a breach of the immutable laws of money.

With the 10-yr treasury at 4%, one could argue that high-quality businesses like Apple and Microsoft deserve a Price-to-Earnings multiple of ~20-25x (equity risk premium of 0-1%). And by this logic, Apple and Microsoft seem fairly valued right now.
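The arithmetic behind that multiple range can be checked directly: a fair P/E is just the inverse of the required earnings yield, which is the risk-free rate plus an equity risk premium. A minimal sketch, using the 4% risk-free rate and the 0-1% premium quoted above:

```python
# Implied fair P/E from a required earnings yield: the yield an investor
# demands is the risk-free rate plus an equity risk premium, and the fair
# P/E multiple is simply the inverse of that yield.
def implied_pe(risk_free: float, equity_risk_premium: float) -> float:
    required_earnings_yield = risk_free + equity_risk_premium
    return 1.0 / required_earnings_yield

# With the 10-yr treasury at 4% and a 0-1% equity risk premium:
print(round(implied_pe(0.04, 0.00), 1))  # 25.0x at a 0% premium
print(round(implied_pe(0.04, 0.01), 1))  # 20.0x at a 1% premium
```

This is why the article's ~20-25x range corresponds to an equity risk premium of 0-1%: the premium is the only free variable once the treasury rate is fixed.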

[Chart: Apple vs. Microsoft earnings multiples]


However, evidence suggests that the 'E' (earning) could be about to contract in upcoming quarters. Last Friday, Advanced Micro Devices (AMD) pre-announced its Q3 results, and it was an absolute shocker for investors. AMD is gaining share in the PC market, and still, its PC revenues (distributed across Client and Gaming business lines) are down significantly in Q3. Now, some customers might be waiting for AMD's upcoming Zen4 devices, but the slowdown in PC markets is pronounced.

[Chart: AMD Q3 preliminary results (AMD Investor Relations)]

As you may know, Apple and Microsoft have significant exposure to PC markets, and the pull forward from COVID could result in a sizeable hit to their topline in upcoming quarters. Before AMD, Micron (MU) and Nike (NKE) announced decent quarterly numbers, but an inventory problem is set to hurt margins in Q4 for both companies. With the Fed hellbent on fighting inflation, the threat of recession looms large. An earnings recession is coming, and even the likes of Apple and Microsoft are not immune to the broader economy. If (more like when) earnings estimates for Q4 and 2023 are revised lower, we will see another leg down in BigTech stocks (and, by extension, broader equity markets).

Despite significant valuation moderation, the near to medium-term risk/reward for Apple and Microsoft is still unfavorable for investors. Here are TQI's fair value estimates and projected returns for Apple and Microsoft:

Stock       Price   TQI Fair Value Estimate   Next 5-yr CAGR Return (%)
Apple       $140    $105.98                   13.26%
Microsoft   $234    $156.27                   10.34%

Now, many DGI investors would happily accept double-digit CAGR returns, and if you are such an investor, buying Apple and Microsoft here is fine. At TQI, our investment hurdle rate is 15%, and since we are not getting that (just yet), I am still 'Neutral' on Apple and Microsoft.
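The CAGR figures in the table follow from the standard compounding formula. TQI's underlying 5-yr price targets are not given in the article, so the end value below is a hypothetical chosen only to show the mechanics:

```python
# Compound annual growth rate: the constant yearly return that takes
# `begin` to `end` over `years` years.
def cagr(begin: float, end: float, years: int) -> float:
    return (end / begin) ** (1.0 / years) - 1.0

# Hypothetical: if Apple at $140 were expected to be worth about $261 in
# 5 years (price appreciation plus reinvested dividends), the implied
# CAGR would be in line with the table's ~13.3% figure.
print(round(cagr(140.0, 261.0, 5) * 100, 2))  # ~13.27
```

The 15% hurdle rate works the same way in reverse: $140 compounding at 15% for 5 years implies a target near $282, which is why the projected ~13% return falls short of the hurdle.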

What Do The Charts Tell Us?

Since the Fed's hawkish pivot in November 2021, broad market indices have been in a correction. In a rising interest rate environment, high-flying tech stocks have come under immense selling pressure. The Nasdaq-100 index [tracked by the QQQ ETF (QQQ)] is re-testing its June lows, and a breakdown of these lows could result in a decline to the pre-COVID range of $215-235 (for QQQ).

[Chart: Nasdaq-100 index with moving averages (WeBull Desktop)]

Microsoft is a significant component of broad market indices like the QQQ and SPY, which means its price action tends to be similar to what we see in the broad market. Unfortunately, Microsoft has already broken below its June lows and now looks set to test the pre-COVID level of $210. My fair value estimate for Microsoft is only $156, so I am unlikely to turn into a buyer at $210, either. For now, Microsoft's stock is firmly entrenched in a falling wedge pattern, and I won't rule out a decline to the mid-100s. And that's where I would like to buy more MSFT shares.

[Chart: MSFT with moving averages (WeBull Desktop)]

Apple is a bellwether stock, and while most tech stocks are falling in downward wedge patterns, Apple's stock chart is looking like a descending broadening wedge, which is a bullish continuation pattern.

[Chart: AAPL with moving averages (WeBull Desktop)]

Technically, Apple is experiencing a correction, and it will likely move higher in the long term. However, in the near term, Apple looks set to re-test its June lows of $130, and if it breaks this key level, Apple could be headed down to its fair value of ~$105 (which is also the 200DMA level).

Considering the medium-term risk/reward [25-40% downside risk vs. 10-13% CAGR returns] for Apple and Microsoft, I rate both of them 'Neutral or Avoid or Hold' at current levels.

Bonds Are Now Looking Attractive

In order to fight persistently high inflation, central banks across the globe have turned to monetary tightening: interest rate hikes combined with liquidity withdrawal through quantitative tightening (balance sheet roll-off). The risk-free treasury rates in the US are now in the 3.5-4% range, and if the Fed sticks to its rate hike path, we could be headed even higher in 2023. After more than a decade, bonds are a real alternative to equities.

In the past, treasury yields have risen beyond the CPI inflation rate during periods of high inflation; however, this ongoing rate hike cycle may be close to peaking out as concerns around financial stability are growing and assets are deflating across the board.

Holding cash is not ideal if you plan to deploy this cash at a certain time in the future. And so parking it in highly-liquid, risk-free assets is a smart move. For those looking to invest in bonds, I want to share a Cash or Treasury management strategy.

A bond ladder is a collection of bonds with different maturities. Such an investment strategy is devised to get assured periodic cash flows. For example, we can invest in ten US treasury notes/bonds with a term length of 1, 2, 3, ... 10 years. Every year one bond matures, and that cash flow can be used as per need. For our investing operations at TQI, we are using T-bills such that one matures each month. In the case of our GARP portfolio, we had $45K (~43% of AUM) in cash that we planned to deploy over the next nine months. Here, we bought T-bills of $5K each with maturity/term lengths of 1 to 9 months. So, instead of $5K, we will have a somewhat greater amount to invest at the time of our planned bi-weekly capital deployments.
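The T-bill ladder described above can be sketched in a few lines. The 4% yield below is a hypothetical placeholder, since actual bill pricing varies by auction, and the simple-interest approximation ignores the discount-pricing convention of real T-bills:

```python
# Build a simple T-bill ladder: split a cash pile evenly across bills
# maturing 1..n months out, so that one bill matures each month.
# The annualized yield here is a hypothetical rate, for illustration only.
def build_ladder(cash: float, months: int, annual_yield: float):
    per_bill = cash / months
    ladder = []
    for m in range(1, months + 1):
        # Simple-interest approximation of the bill's value at maturity.
        maturity_value = per_bill * (1 + annual_yield * m / 12)
        ladder.append((m, per_bill, round(maturity_value, 2)))
    return ladder

# The article's example: $45K deployed across nine monthly maturities.
for month, invested, at_maturity in build_ladder(45_000, 9, 0.04):
    print(f"month {month}: ${invested:,.0f} in, ${at_maturity:,.2f} back")
```

Each month one rung matures, so the cash earmarked for that month's deployment comes back slightly larger instead of sitting idle at zero yield.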

Building a bond ladder is simple, but if you have any questions, please share them in the comments section below.

Final Thoughts

We concluded our last update on "The Battle of Safe Havens" in the following manner:

According to the definition, a bear market ends with a 20% bounce off of lows, and we got this in recent weeks. Hence, by definition, the bear market is over, and a new bull market has started. However, I think it is still too early to call a bottom. A tighter monetary policy could lead to a growth slowdown and cause a recession. Despite the growing clamor for a Fed pivot, I still think inflation is too high, and the Fed will need to keep going for some time to come. The markets may go up with rates (as this has happened in the past), but these tightening cycles often lead to something breaking in the economy and eventually a crash in the stock market. Will this time be any different? I don't know.

I don't know where the market is headed next; nobody else knows either. The macro-environment remains challenging, and the Fed's QT [quantitative tightening] program is just getting started. With Apple and Microsoft trading at lofty valuations despite an evident slowdown in revenue growth and significant moderation in operating margins, I think the near to medium-term risk/reward from current levels is unfavorable for bulls. Yes, there are tons of opportunities in beaten-down growth stocks, but if the large caps get hit (in an earnings recession), the smaller cap stocks will likely continue to remain under pressure. Hence, I plan to stick with The Quantamental Investor's playbook for a bear market environment -

"Build long positions slowly using DCA plans, and manage risk proactively."

Source: Apple Vs. Microsoft Vs. Treasury Bonds: The Battle Of Safe Havens Round-2

How Are We Investing In These Uncertain Markets At The Quantamental Investor?

As we have seen in the "Battle of Safe Havens" series, traditional safe-haven stocks like Apple and Microsoft are not so safe for the near to medium term.

As of today, the entire June-August rally has been reversed, and it is fair to say this move was just another bear market rally. I have no idea where the market is headed next. So far, in this bear market, the selling has been very much measured. We haven't seen capitulation. Will the market (SPX) crash to $3,000 by year-end? I don't know. What I learned from Micron and Nike's results is that corporate earnings will come under severe pressure in upcoming quarters. Honestly, I think earnings will drive markets going forward because the multiple contraction is more or less complete (except for a few large-cap tech names like Apple, Microsoft, and Tesla (TSLA)). The top 10 S&P 500 companies are trading at 21-22x+ PE, whereas the remaining 490 are already at 13-14x PE.

[Image: Author's investment mandates (The Quantamental Investor)]

This is a very tricky market, but there are tons of incredible opportunities for individual stock investing. In all three of TQI's core portfolios [GARP, Buyback-Dividend, and Moonshot Growth], we are ready with cash (roughly 50% of AUM) if the opportunities improve. Our playbook for this bear market is simple - "Build long positions slowly using DCA plans, and manage risk proactively."

In the "Battle of Safe Havens", cash has been the winner so far; however, surging treasury rates are making treasury bonds a viable alternative to equities. If I had to choose between Apple, Microsoft, and the 2-yr treasury bond, I would go with the 2-yr treasury bond for the medium term.

Key Takeaway: I rate both Apple and Microsoft 'Neutral/Avoid/Hold' at current levels.

Thanks for reading, and happy investing. Please share your thoughts, questions, and/or concerns in the comments section below.

Mon, 10 Oct 2022 01:29:00 -0500 en text/html https://seekingalpha.com/article/4545593-apple-microsoft-treasury-bonds-safe-havens
Killexams : New Movies on Apple TV+

Apple’s streaming service, Apple TV+, has focused more on original series than feature-length films, with its subscription model driven by hits like Ted Lasso, Severance, The Morning Show and For All Mankind. But with only a fraction of the original movie releases of its rivals (just 15 at our last count), Apple TV+ was still the first streamer to capture a coveted Best Picture Oscar win with last year’s CODA. And the computer-manufacturer-turned-entertainment-titan has released five new movies so far this year.

Here are the five most recent movies to stream on Apple TV+:

The Greatest Beer Run Ever
Release Date: September 30, 2022
Director: Peter Farrelly
Starring: Zac Efron, Russell Crowe, Jake Picking, Archie Renaux, Kyle Allen, Will Ropp
Rating: R
Runtime: 126 minutes

Watch on Apple TV+

Peter Farrelly is, credit where due, an Oscar-level director, but he’s also an easy mark for fabrications, which is why Green Book is an affront to good taste, and one of The Greatest Beer Run Ever’s central motifs: The truth. John “Chickie” Donohue (Zac Efron) is likewise a total sucker for feel-good bullshit. He buys the American military’s stories about Vietnam and communism and monumental tallies of V.C. ass getting kicked by the U.S. of A. And why not? The propaganda goes down as smoothly as macro-brewed beer. The Greatest Beer Run Ever coolly confronts Chickie with a daunting existential question: What if all those stories are bald lies? No way, says Chickie. Screw those pinko hippies protesting the war out in the streets, defaming America’s troops; he’s so fired up about the disrespect shown to his neighborhood pals fighting abroad, some drafted into service, others encouraged to serve voluntarily, that, after a little casual egging on by his neighborhood pals at home, Chickie decides he’s going to inveigle his way into Vietnam with a duffel full of Pabst, hit up each base where he has friends stationed, and hand ‘em frosty ones as his affable, lunkheaded way of thanking them for their service. An idiotic gesture? Certainly! But is the gesture well received? Not really, no! It’s the Vietnam War. There are no rules, as Walter Sobchack snidely lectures Donny Kerabatsos in The Big Lebowski, and that applies to combat as well as gratitude. Like Green Book, The Greatest Beer Run Ever is a story about one man blithely strolling into others’ lives, and how bearing witness to their travails forces him to reassess his siloed worldview. Chickie is a man out of place and out of his depth. Vietnam’s war-torn horrors should feel colossal. But they feel dutifully staged. Even the emotional beats between Efron and his supporting cast—Jake Picking, Archie Renaux, Kyle Allen and Will Ropp, Chickie’s enlisted pals—don’t have the proper scale. 
Worse, Farrelly simply uses the movie as a template for laying down commentary ripped from today (about the power and necessity of truth in journalism) over Chickie’s story and Vietnam’s history. In a film seemingly made of lazy choices, this is the laziest, and most craven, of all. Farrelly is too busy making a Big Important Movie instead of making a movie that matters. —Andy Crump

Luck
Release Date: August 5, 2022
Director: Peggy Holmes
Stars: Eva Noblezada, Simon Pegg, Jane Fonda, Whoopi Goldberg, Colin O’Donoghue, Lil Rel Howery, Flula Borg, John Ratzenberger, Adelynn Spoon
Rating: PG
Runtime: 115 minutes

Watch on Apple TV+

Eighteen-year-old Sam (Eva Noblezada) has always been unlucky. Her keys fall down a manhole. Her bike has a flat tire. She inadvertently locks herself in the bathroom. Her toast always lands jam side down. But perhaps her biggest misfortune is that she never found her “forever family” and grew up in the Summerland Home for Girls. (The movie kicks the cliché of killing parents off up a notch: Sam never had parents at all!). Sam’s luck changes when she meets talking black cat Bob (Simon Pegg) who accidentally leaves behind a lucky penny. The penny turns Sam’s life around. Suddenly she’s stocking the shelves at her job at Flowers and More with aplomb. Her toaster works perfectly and even lands her toast jam side up. When Sam accidentally flushes the lucky penny down the toilet (what is a kid’s movie without a little toilet humor?), she is desperate to find another one and follows Bob down the secret portal to the Land of Luck. The plot of Luck is far too dense and convoluted. I suspect the movie’s target audience won’t have the patience for it. And that’s too bad. Because inside Kiel Murray’s complex script, there is a positive message: The idea that bad luck is just the mirror image of good luck, and that bad luck teaches you how to adjust and respond to what life brings. That some of Sam’s best experiences and friendships began with bad luck. That perhaps our bad experiences help make us who we are. —Amy Amatangelo

Cha Cha Real Smooth
Release Date: June 17, 2022
Director: Cooper Raiff
Stars: Dakota Johnson, Cooper Raiff, Vanessa Burghardt, Evan Assante, Brad Garrett, Leslie Mann
Rating: R
Runtime: 107 minutes

Watch on Apple TV+

Every once in a while you meet someone who’s truly just some guy, with no discernibly extraordinary qualities, for whom things seem to work out based on charisma alone. In writer-director-star Cooper Raiff’s friendly sophomore feature Cha Cha Real Smooth, that guy happens to be Andrew (Raiff), a charming and disarming recent Tulane graduate whose sole motivation is to make enough money to join his Fulbright scholar girlfriend in Barcelona. Unfortunately, the only job he can grab is as a minimum wage cashier at an unforgivingly named food court stand in his hometown while he crashes in his little brother’s room, fights with his pragmatist stepdad (Brad Garrett), and attempts to convince his mom (Leslie Mann) that she has the wrong taste in men and he has the right taste in women. Into this meandering existence stumble the opportunities of his lifetime thus far. While escorting his brother, David (the cute-as-a-button Evan Assante), to a bar mitzvah bash, Andrew takes it upon himself to spice up the floundering dance floor, and to make friends with the resident rumored bad mom, Domino (Dakota Johnson), and her autistic daughter, Lola (natural newcomer Vanessa Burghardt). He succeeds wildly at both, getting hired by a mob of Jewish moms as a party starter for their childrens’ b’nai mitzvot, and securing Domino’s affection in the process. In this indie, as with many before it, nothing is more attractive to a hot mom than a goofy, unfiltered young man-child who treats her own child like royalty. Also in this indie, as with many before it, Judaism is used as a backdrop and as texture, but isn’t engaged with on any level beyond its visual symbolism. But for the runtime of Cha Cha Real Smooth, Raiff’s clever script, affable characters and naturalistic direction makes it painless enough to sympathize with someone who can’t moonwalk, but will nevertheless skate on by. —Shayna Maci Warner

The Sky Is Everywhere
Release Date: February 11, 2022
Director: Josephine Decker
Stars: Grace Kaufman, Pico Alexander, Jacques Colimon, Cherry Jones, Jason Segel
Rating: PG-13
Runtime: 103 minutes

Watch on Apple TV+

YA adaptations in film often get an undeservedly bad rap, if only because popular contemporary YA fiction—at least of the sort that tends to garner enough mainstream attention to land feature film or television deals—is known for often being tremendously sad and cloyingly emotional. Ugly crying, for the reader, at least, is the norm. So, consider this your warning going in that The Sky Is Everywhere is an emotional ride, one that frequently skirts the line between sharply truthful and painfully saccharine. (Usually ending up in the realm of the former, but not always.) Yet its whimsical, fairytale feel generally keeps the story from feeling like something you’ve seen before. Adapted from the novel of the same name by author Jandy Nelson, the story centers on Lennie Walker (Grace Kaufman), a teen musical prodigy who’s struggling to figure out how to keep going in the wake of the sudden death of her older sister Bailey (Havana Rose Liu). The two sisters were exceptionally close, and much of Lennie’s plans for her life after high school revolved around the fact that the pair would do them together, from becoming roommates to attending Julliard. To say that Lennie doesn’t know who she is anymore without her sister is an understatement and her sense of self is further rocked throughout the film by the revelation of several key secrets Bailey had been keeping from her. Director Josephine Decker manages to find unexpected and beautiful ways to visually represent Lennie’s emotional state. The Sky Is Everywhere is full of strange and surprising images that include everything from over-the-top riots of color to claustrophobic, almost horror-like darkness. 
From the faceless rose-people who rise from the ground to form a human-plant hybrid wreath around Lennie and Joe during an important moment to the impromptu dance sequence that breaks out as Lennie reminisces about her sister’s love of music to the forest that suddenly starts raining broken furniture, Decker makes a ton of interesting choices that add color and depth to an otherwise fairly simple and straightforward story. The same can generally be said of The Sky Is Everywhere itself, which puts a fresh spin on an age-old topic and brings a much-loved book to life. —Lacy Baugher Milas

The Tragedy of Macbeth
Release Date: January 14, 2022
Director: Joel Coen
Stars: Denzel Washington, Frances McDormand, Brendan Gleeson, Corey Hawkins, Moses Ingram, Kathryn Hunter, Bertie Carvel, Harry Melling
Rating: R
Runtime: 105 minutes

Watch on Apple TV+

Defined by stark minimalism, Joel Coen’s The Tragedy of Macbeth is an undeniable directorial flex. Coen commands the film’s slickly sparse black-and-white visuals alongside his cast of renowned actors, yielding a final product saturated with artistic determination—but one stripped of any semblance of madness or mania. The highly stylized aesthetic of the film—coupled with regretfully restrained performances—transform Macbeth into an all too tedious tragedy. Though it hardly requires recapitulation, The Tragedy of Macbeth follows the eponymous ruthless Scottish general (Denzel Washington) and his Lady (Frances McDormand) in the wake of a jarring prophecy. Coen’s Macbeth attempts to distinguish itself in comparatively cautious ways: Washington and McDormand occupy roles typically filled by younger actors, while the film’s milky white and dense black contrast enhances the otherwise barren landscape. Macbeth lacks any clear innovative distinction aside from a visually malleable soundstage and long-established actors. The rigid imagery, coupled with drably subdued performances from the film’s leads, demonstrates an inability to capture an overwrought descent into insanity; it is mania preventatively quashed by SSRIs. McDormand’s intent to portray Lady Macbeth as macabrely muted results in a restrictive rigidity. Meanwhile, Washington’s Macbeth is somewhat more convincing in his trepidation, but the role ultimately feels miscast—after all, the text’s succinct nature positions the cunning Scottish King as an unlikable fiend. The Tragedy of Macbeth is nonetheless a well-executed adaptation. The film’s staging and cinematography are clever and compelling; the thespians involved are unequivocally talented; it is competently helmed by one of the most influential directors currently working in Hollywood. Unfortunately, the bar set so high by previous Coen efforts renders all of these successful components moot. 
Joel Coen’s Macbeth lacks risk, ingenuity and, most importantly, reward. For those who seek a safely satisfying rendition of the lean Shakespearean tragedy, this latest execution will surely suffice. —Natalia Keogan

Tue, 04 Oct 2022 12:00:00 -0500 en text/html https://www.pastemagazine.com/movies/apple-tv-/new-movies-on-apple-tv-plus/?deployment=agilityzone&device=mobile&segments=
Killexams : AI Ethics And AI Law Asking Hard Questions About That New Pledge By Dancing Robot Makers Saying They Will Avert AI Weaponization

You might perchance have seen in the news last week, or noticed on social media, the announced pledge by some robot makers about their professed aim to avoid AI weaponization of general-purpose robots. I'll be walking you through the details in a moment, so don't worry if you haven't caught wind of the matter.

The reaction to this proclamation has been swift and, perhaps as usual in our polarized society, been both laudatory and at times mockingly critical or downright nastily skeptical.

It is a tale of two worlds.

In one world, some say that this is exactly what we need for responsible AI robot developers to declare.

Thank goodness for being on the right side of an issue that will gradually be getting more visible and more worrisome. Those cute dancing robots are troubling because it is pretty easy to rejigger them to carry weapons and be used in the worst of ways (you can check this out yourself on social media, where there are plenty of videos showcasing dancing robots armed with machine guns and other armaments).

The other side of this coin says that the so-called pledge is nothing more than a marketing or public relations ploy (as a side note, is anybody familiar with the difference between a pledge and a donation?). Anyway, the doubters exhort that this is unbridled virtue signaling in the context of dancing robots. You see, bemoaning the fact that general-purpose robots can be weaponized is certainly a worthwhile and earnestly sought consideration, though merely claiming that a maker won’t do so is likely a hollow promise, some insist.

All in all, the entire matter brings up quite a hefty set of AI Ethics and AI Law considerations. We will meticulously unpack the topic and see how this is a double-whammy of an ethical and legal AI morass. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.

I will also be referring throughout this discussion to my prior analyses of the dangers of AI weaponization, such as my in-depth assessment at the link here. You might want to take a look at that discourse for additional behind-the-scenes details.

The Open Letter That Opens A Can Of Worms

Let’s begin this analysis by doing a careful step-by-step exploration of the Open Letter that was recently published by six relatively well-known advanced robot makers, namely Boston Dynamics, Clearpath Robotics, ANYbotics, Agility Robotics, Open Robotics, and Unitree. By and large, I am guessing that you have seen mainly the Boston Dynamics robots, such as the ones that prance around on all fours. They look as though they are dog-like and we relish seeing them scampering around.

As I’ve previously and repeatedly forewarned, the use of such “dancing” robots as a means of convincing the general public that these robots are cutesy and adorable is sadly misleading and veers into the abundant pitfalls of anthropomorphizing them. We begin to think of these hardened pieces of metal and plastic as though they are the equivalent of a cuddly loyal dog. Our willingness to accept these robots is predicated on a false sense of safety and assurance. Sure, you’ve got to make a buck and the odds of doing so are enhanced by parading around dancing robots, but this regrettably omits or seemingly hides the real fact that these robots are robots and that the AI controlling the robots can be devised wrongfully or go awry.

Consider these ramifications of AI (excerpted from my article on AI weaponization, found at the link here):

  • AI might encounter an error that causes it to go astray
  • AI might be overwhelmed and lockup unresponsively
  • AI might contain developer bugs that cause erratic behavior
  • AI might be corrupted with implanted evildoer virus
  • AI might be taken over by cyberhackers in real-time
  • AI might be considered unpredictable due to complexities
  • AI might computationally make the “wrong” decision (relatively)
  • Etc.

Those are points regarding AI that is of the type that is genuinely devised at the get-go to do the right thing.

On top of those considerations, you have to include AI systems crafted from inception to do bad things. You can have AI that is made for beneficial purposes, often referred to as AI For Good. You can also have AI that is intentionally made for bad purposes, known as AI For Bad. Furthermore, you can have AI For Good that is corrupted or rejiggered into becoming AI For Bad.

By the way, none of this has anything to do with AI becoming sentient, which I mention because some keep exclaiming that today’s AI is either sentient or on the verge of being sentient. Not so. I take apart those myths in my analysis at the link here.

Let’s make sure then that we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

I’d strongly suggest that we keep things down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense and nor has any of the cognitive wonderment of robust human thinking.

Be very careful of anthropomorphizing today’s AI.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.

I think you can guess where this is heading. If humans that have been making the patterned upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.
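The mimicry just described can be made concrete with a toy sketch. Everything here is invented for illustration (hypothetical loan records keyed on neighborhood and income band): a pattern matcher trained on historical decisions simply replays whatever regularities, including biases, those decisions contain.

```python
# Toy pattern matcher: learn the majority historical outcome for each
# feature pattern, then apply those "old" patterns to new cases.
from collections import defaultdict

def train(records):
    """Map each feature pattern to its majority historical outcome (0 or 1)."""
    tallies = defaultdict(lambda: [0, 0])  # pattern -> [count of 0s, count of 1s]
    for features, outcome in records:
        tallies[features][outcome] += 1
    return {p: int(c[1] >= c[0]) for p, c in tallies.items()}

# Hypothetical historical loan decisions: (neighborhood, income_band) -> approved?
history = [
    (("north", "high"), 1), (("north", "low"), 1),
    (("south", "high"), 0), (("south", "low"), 0),  # a biased pattern
]
model = train(history)

# A new applicant gets the old pattern applied, bias included.
print(model[("south", "high")])  # 0: denied despite high income
```

A real ML/DL model encodes its patterns in millions of parameters rather than a lookup table, but the failure mode is the same: if "south" applicants were historically denied, new "south" applicants get denied too, with no common sense intervening.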

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing that there will be biases still embedded within the pattern-matching models of the ML/DL.

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.

Not good.

All of this has notably significant AI Ethics implications and offers a handy window into lessons learned (even before all the lessons happen) when it comes to trying to legislate AI.

Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.

Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. They forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages.

In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI, see the link here, for example. I have also covered the various AI Ethics principles and guidelines that various nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here.

Here's a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously closely explored:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems.

All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As emphasized earlier herein, it takes a village to devise and field AI, and for which the entire village has to be versed in and abide by AI Ethics precepts.

Now that I’ve laid a helpful foundation for getting into the Open Letter, we are ready to dive in.

The official subject title of the Open Letter is this:

  • “An Open Letter to the Robotics Industry and our Communities, General Purpose Robots Should Not Be Weaponized” (as per posted online).

So far, so good.

The title almost seems like ice cream and apple pie. How could anyone dispute this as an earnest call to avoid AI robot weaponization?

Read on to see.

First, as fodder for consideration, here’s the official opening paragraph of the Open Letter:

  • “We are some of the world’s leading companies dedicated to introducing new generations of advanced mobile robotics to society. These new generations of robots are more accessible, easier to operate, more autonomous, affordable, and adaptable than previous generations, and capable of navigating into locations previously inaccessible to automated or remotely-controlled technologies. We believe that advanced mobile robots will provide great benefit to society as co-workers in industry and companions in our homes” (as per posted online).

The sunny side to the advent of these types of robots is that we can anticipate a lot of great benefits to emerge. No doubt about it. You might have a robot in your home that can do those Jetson-like activities such as cleaning your house, washing your dishes, and other chores around the household. We will have advanced robots for use in factories and manufacturing facilities. Robots can potentially crawl or maneuver into tight spaces such as when a building collapses and human lives are at stake to be saved. And so on.

As an aside, you might find of interest my recent coverage of the Tesla AI Day, at which some kind-of walking robots were portrayed by Elon Musk as the future for Tesla and society, see the link here.

Back to the matter at hand. When seriously discussing dancing robots or walking robots, we need to mindfully take into account tradeoffs or the total ROI (Return on Investment) of this use of AI. We should not allow ourselves to become overly enamored by benefits when there are also costs to be considered.

A shiny new toy can have rather sharp edges.

All of this spurs an important but somewhat silent point that part of the reason that the AI weaponization issue arises now is due to AI advancement toward autonomous activity. We have usually expected that weapons are generally human operated. A human makes the decision whether to fire or engage the weapon. We can presumably hold that human accountable for their actions.

AI that is devised to work autonomously or that can be tricked into doing so would seemingly remove the human from the loop. The AI is then algorithmically making computational decisions that can end up killing or harming humans. Besides the obvious concerns about lack of control over the AI, you also have the qualms that we might have an arduous time pinning responsibility as to the actions of the AI. We don’t have a human that is our obvious instigator.

I realize that some believe that we ought to simply and directly hold the AI responsible for its actions, as though AI has attained sentience or otherwise been granted legal personhood (see my coverage on the debates over AI garnering legal personhood at the link here). That isn’t going to work for now. We are going to have to trace the AI to the humans that either devised it or that fielded it. They will undoubtedly try to legally dodge responsibility by trying to contend that the AI went beyond what they had envisioned. This is a growing contention that we need to deal with (see my AI Law writings for insights on the contentious issues involved).

The United Nations (UN) via the Convention on Certain Conventional Weapons (CCW) in Geneva has established eleven non-binding Guiding Principles on Lethal Autonomous Weapons, as per the official report posted online (encompassing references to pertinent International Humanitarian Law or IHL provisos), including:

(a) International humanitarian law continues to apply fully to all weapons systems, including the potential development and use of lethal autonomous weapons systems;

(b) Human responsibility for decisions on the use of weapons systems must be retained since accountability cannot be transferred to machines. This should be considered across the entire life cycle of the weapons system;

(c) Human-machine interaction, which may take various forms and be implemented at various stages of the life cycle of a weapon, should ensure that the potential use of weapons systems based on emerging technologies in the area of lethal autonomous weapons systems is in compliance with applicable international law, in particular IHL. In determining the quality and extent of human-machine interaction, a range of factors should be considered including the operational context, and the characteristics and capabilities of the weapons system as a whole;

(d) Accountability for developing, deploying and using any emerging weapons system in the framework of the CCW must be ensured in accordance with applicable international law, including through the operation of such systems within a responsible chain of human command and control;

(e) In accordance with States’ obligations under international law, in the study, development, acquisition, or adoption of a new weapon, means or method of warfare, determination must be made whether its employment would, in some or all circumstances, be prohibited by international law;

(f) When developing or acquiring new weapons systems based on emerging technologies in the area of lethal autonomous weapons systems, physical security, appropriate non-physical safeguards (including cyber-security against hacking or data spoofing), the risk of acquisition by terrorist groups and the risk of proliferation should be considered;

(g) Risk assessments and mitigation measures should be part of the design, development, testing and deployment cycle of emerging technologies in any weapons systems;

(h) Consideration should be given to the use of emerging technologies in the area of lethal autonomous weapons systems in upholding compliance with IHL and other applicable international legal obligations;

(i) In crafting potential policy measures, emerging technologies in the area of lethal autonomous weapons systems should not be anthropomorphized;

(j) Discussions and any potential policy measures taken within the context of the CCW should not hamper progress in or access to peaceful uses of intelligent autonomous technologies;

(k) The CCW offers an appropriate framework for dealing with the issue of emerging technologies in the area of lethal autonomous weapons systems within the context of the objectives and purposes of the Convention, which seeks to strike a balance between military necessity and humanitarian considerations.

These and other various laws of war and laws of armed conflict, or IHL (International Humanitarian Laws) serve as a vital and ever-promising guide to considering what we might try to do about the advent of autonomous systems that are weaponized, whether by keystone design or by after-the-fact methods.

Some say we should outrightly ban those AI autonomous systems that are weaponizable. That’s right, the world should put its foot down and stridently demand that AI autonomous systems shall never be weaponized. A total ban is to be imposed. End of story. Full stop, period.

Well, we can sincerely wish that a ban on lethal weaponized autonomous systems would be strictly and obediently observed. The problem is that a lot of wiggle room is bound to slyly be found within any of the sincerest of bans. As they say, rules are meant to be broken. You can bet that where things are loosey-goosey, riffraff will ferret out gaps and try to wink-wink their way around the rules.

Here are some potential loopholes worthy of consideration:

  • Claims of Non-Lethal. Make non-lethal autonomous weapons systems (seemingly okay since it is outside of the ban boundary), which you can then on a dime shift into becoming lethal (you’ll only be beyond the ban at the last minute).
  • Claims of Autonomous System Only. Uphold the ban by not making lethal-focused autonomous systems, meanwhile, be making as much progress on devising everyday autonomous systems that aren’t (yet) weaponized but that you can on a dime retrofit into being weaponized.
  • Claims of Not Integrated As One. Craft autonomous systems that are not at all weaponized, and when the time comes, piggyback weaponization such that you can attempt to vehemently argue that they are two separate elements and therefore contend that they do not fall within the rubric of an all-in-one autonomous weapon system or its cousin.
  • Claims That It Is Not Autonomous. Make a weapon system that does not seem to be of autonomous capacities. Leave room in this presumably non-autonomous system for the dropping in of AI-based autonomy. When needed, plug in the autonomy and you are ready to roll (until then, seemingly you were not violating the ban).
  • Other

There are plenty of other expressed difficulties with trying to outright ban lethal autonomous weapons systems. I’ll cover a few more of them.

Some pundits argue that a ban is not especially useful and instead there should be regulatory provisions. The idea is that these contraptions will be allowed but stridently policed. A litany of lawful uses is laid out, along with lawful ways of targeting, lawful types of capabilities, lawful proportionality, and the like.

In their view, a straight-out ban is like putting your head in the sand and pretending that the elephant in the room doesn’t exist. This contention though gets the blood boiling of those that counter with the argument that by instituting a ban you are able to dramatically reduce the otherwise temptation to pursue these kinds of systems. Sure, some will flout the ban, but at least hopefully most will not. You can then focus your attention on the flouters and not have to splinter your attention to everyone.

Round and round these debates go.

Another oft-noted concern is that even if the good abide by the ban, the bad will not. This puts the good in a lousy posture. The bad will have these kinds of weaponized autonomous systems and the good won’t. Once things are revealed that the bad have them, it will be too late for the good to catch up. In short, the only astute thing to do is to prepare to fight fire with fire.

There is also the classic deterrence contention. If the good opt to make weaponized autonomous systems, this can be used to deter the bad from seeking to get into a tussle. Either the good will be better armed and thusly dissuade the bad, or the good will be ready when the bad perhaps unveils that they have surreptitiously been devising those systems all along.

A counter to these counters is that by making weaponized autonomous systems, you are waging an arms race. The other side will seek to have the same. Even if they are technologically unable to create such systems anew, they will now be able to steal the plans of the “good” ones, reverse engineer the high-tech guts, or mimic whatever they seem to see as a tried-and-true way to get the job done.

Aha, some retort, all of this might lead to a reduction in conflicts by a semblance of mutual deterrence. If side A knows that side B has those lethal autonomous systems weapons, and side B knows that side A has them, they might sit tight and not come to blows. This has that distinct aura of mutually assured destruction (MAD) vibes.

And so on.

Looking Closely At The Second Paragraph

We have already covered a lot of ground herein and only so far considered the first or opening paragraph of the Open Letter (there are four paragraphs in total).

Time to take a look at the second paragraph, here you go:

  • “As with any new technology offering new capabilities, the emergence of advanced mobile robots offers the possibility of misuse. Untrustworthy people could use them to invade civil rights or to threaten, harm, or intimidate others. One area of particular concern is weaponization. We believe that adding weapons to robots that are remotely or autonomously operated, widely available to the public, and capable of navigating to previously inaccessible locations where people live and work, raises new risks of harm and serious ethical issues. Weaponized applications of these newly-capable robots will also harm public trust in the technology in ways that damage the tremendous benefits they will bring to society. For these reasons, we do not support the weaponization of our advanced-mobility general-purpose robots. For those of us who have spoken on this issue in the past, and those engaging for the first time, we now feel renewed urgency in light of the increasing public concern in recent months caused by a small number of people who have visibly publicized their makeshift efforts to weaponize commercially available robots” (as per posted online).

Upon reading that second paragraph, I hope you can see how my earlier discourse herein on AI weaponization comes to the fore.

Let’s examine a few additional points.

One qualm about a particular wording aspect, one that has gotten the dander up for some, is that the narrative seems to emphasize that “untrustworthy people” could misuse these AI robots. Yes, indeed, it could be bad people or evildoers that bring about dastardly acts that will “misuse” AI robots.

At the same time, as pointed out toward the start of this discussion, we need to also make clear that the AI itself could go awry, possibly due to embedded bugs or errors and other such complications. The expressed concern is that emphasizing only the chances of untrustworthy people seems to ignore other adverse possibilities. Though most AI companies and vendors are loath to admit it, there is a plethora of AI systems issues that can undercut the safety and reliability of autonomous systems. For my coverage of AI safety and the need for rigorous and provable safeguards, see the link here, for example.

Another notable point that has come up amongst those that have examined the Open Letter entails the assertion that misuse could end up undercutting the public trust associated with AI robots.

On the one hand, this is a valid assertion. If AI robots are used to do evil bidding, you can bet that the public will get quite steamed. When the public gets steamed, you can bet that lawmakers will jump into the foray and seek to enact laws that clamp down on AI robots and AI robotic makers. This in turn could cripple the AI robotics industry if the laws are all-encompassing and shut down efforts involving AI robotic benefits. In a sense, the baby could get thrown out with the bathwater (an old expression, probably deserving to be retired).

The obvious question brought up too is whether this assertion about averting a reduction in public trust for AI robots is a somewhat self-serving credo or whether it is for the good of us all (can it be both?).

You decide.

We now come to the especially meaty part of the Open Letter:

  • “We pledge that we will not weaponize our advanced-mobility general-purpose robots or the software we develop that enables advanced robotics and we will not support others to do so. When possible, we will carefully review our customers’ intended applications to avoid potential weaponization. We also pledge to explore the development of technological features that could mitigate or reduce these risks. To be clear, we are not taking issue with existing technologies that nations and their government agencies use to defend themselves and uphold their laws” (as per posted online).

We can unpack this.

Sit down and prepare yourself accordingly.

Are you ready for some fiery polarization?

On the favorable side, some are vocally heralding that these AI robot makers would make such a pledge. It seems that these robot makers will thankfully seek to not weaponize their “advanced-mobility general-purpose” robots. In addition, the Open Letter says that they will not support others that do so.

Critics wonder whether there is some clever wordsmithing going on.

For example, where does “advanced-mobility” start and end? If a robot maker is devising a simple-mobility AI robot rather than an advanced one (which is an undefined piece of techie jargon), does that get excluded from the scope of what will not be weaponized? Thus, apparently, it is okay to weaponize simple-mobility AI robots, as long as they aren’t so-called advanced.

The same goes for the phrasing of general-purpose robots. If an AI robot is devised specifically for weaponization and therefore is not shall we say a general-purpose robot, does that become a viable exclusion from the scope?

You might quibble with these quibbles and fervently argue that this is just an Open Letter and not a fifty-page legal document that spells out every nook and cranny.

This brings us to the seemingly more macro-level qualm expressed by some. In essence, what does a “pledge” denote?

Some ask, where’s the beef?

A company that makes a pledge like this is seemingly doing so without any true stake in the game. If the top brass of any firm that signs up for this pledge decides to no longer honor the pledge, what happens to that firm? Will the executives get summarily canned? Will the company close down and profusely apologize for having violated the pledge? And so on.

As far as can be inferred, there is no particular penalty or penalization for any violation of the pledge.

You might argue that there is a possibility of reputational damage. A pledging firm might be dinged in the marketplace for having made a pledge that it no longer observed. Of course, this also assumes that people will remember that the pledge was made. It also assumes that the violation of the pledge will be somehow detected (it distinctly seems unlikely a firm will tell all if it does so). The pledge violator would have to be called out and yet such an issue might become mere noise in the ongoing tsunami of news about AI robotics makers.

Consider another angle that has come up.

A pledging firm gets bought up by some larger firm. The larger firm opts to start turning the advanced-mobility general-purpose robots into AI weaponized versions.

Is this a violation of the pledge?

The larger firm might insist that it is not a violation since they (the larger firm) never made the pledge. Meanwhile, the innocuous AI robots that the smaller firm has put together and devised, doing so with seemingly the most altruistic of intentions, get nearly overnight revamped into being weaponized.

Kind of undermines the pledge, though you might say that the smaller firm didn’t know that this would someday happen. They were earnest in their desire. It was out of their control as to what the larger buying firm opted to do.

Some also ask whether there is any legal liability in this.

A pledging firm decides a few months from now that it is not going to honor the pledge. They have had a change of heart. Can the firm be sued for having abandoned the pledge that it made? Who would sue? What would be the basis for the lawsuit? A slew of legal issues arise. As they say, you can pretty much sue just about anybody, but whether you will prevail is a different matter altogether.

Think of this another way. A pledging firm gets an opportunity to make a really big deal to sell a whole bunch of its advanced-mobility general-purpose robots to a massive company that is willing to pay through the nose to get the robots. It is one of those once-in-a-lifetime zillion-dollar purchase deals.

What should the AI robotics company do?

If the AI robotics pledging firm is publicly traded, they would almost certainly aim to make the sale (the same could be said of a privately held firm, though not quite so). Imagine that the pledging firm is thinking that the buyer might try to weaponize the robots, though let’s say there isn’t any such discussion on the table. It is just rumored that the buyer might do so.

Accordingly, the pledging firm puts into their licensing that the robots aren’t to be weaponized. The buyer balks at this language and steps away from the purchase.

How much profit did the pledging AI robotics firm just walk away from?

Is there a point at which the in-hand profit outweighs the inclusion of the licensing restriction requirement (or, perhaps, legally wording the restriction to allow for wiggle room and still make the deal happen)? I think that you can see the quandary involved. Tons of such scenarios are easily conjured up. The question is whether this pledge is going to have teeth. If so, what kind of teeth?

In short, as mentioned at the start of this discussion, some are amped up that this type of pledge is being made, while others are taking a dimmer view of whether the pledge will hold water.

We move on.

Getting A Pledge Going

The fourth and final paragraph of the Open Letter says this:

  • “We understand that our commitment alone is not enough to fully address these risks, and therefore we call on policymakers to work with us to promote safe use of these robots and to prohibit their misuse. We also call on every organization, developer, researcher, and user in the robotics community to make similar pledges not to build, authorize, support, or enable the attachment of weaponry to such robots. We are convinced that the benefits for humanity of these technologies strongly outweigh the risk of misuse, and we are excited about a bright future in which humans and robots work side by side to tackle some of the world’s challenges” (as per posted online).

This last portion of the Open Letter has several additional elements that have raised ire.

Calling upon policymakers can be well-advised or ill-advised, some assert. You might get policymakers that aren’t versed in these matters that then do the classic rush-to-judgment and craft laws and regulations that usurp the progress on AI robots. Per the point earlier made, perhaps the innovation that is pushing forward on AI robotic advances will get disrupted or stomped on.

Better be sure that you know what you are asking for, the critics say.

Of course, the counter-argument is that the narrative clearly states that policymakers should be working with AI robotics firms to figure out how to presumably sensibly make such laws and regulations. The counter to the counter-argument is that the policymakers might be seen as beholden to the AI robotics makers if they cater to their whims. The counter to the counter of the counter-argument is that it is naturally a necessity to work with those that know about the technology, or else the outcome is going to potentially be out of kilter. Etc.

On a perhaps quibbling basis, some have had heartburn over the line that calls upon everyone to make similar pledges as to not attaching weaponry to advanced-mobility general-purpose robots. The keyword there is the word attaching. If someone is making an AI robot that incorporates or seamlessly embeds weaponry, that seems to get around the wording of attaching something. You can see it now, someone vehemently arguing that the weapon is not attached, it is completely part and parcel of the AI robot. Get over it, they exclaim, we aren’t within the scope of that pledge, and they could even have otherwise said that they were.

This brings up another complaint about the lack of stickiness of the pledge.

Can a firm or anyone at all that opts to make this pledge declare themselves unpledged at any time that they wish to do so and for whatever reason they so desire?

Apparently so.

There is a lot of bandying around about making pledges and what traction they imbue.

Conclusion


Yikes, you might say, these companies that are trying to do the right thing are getting drummed for trying to do the right thing.

What has come of our world?

Anyone that makes such a pledge ought to be given the benefit of the doubt, you might passionately maintain. They are stepping out into the public sphere to make a bold and vital contribution. If we start besmirching them for doing so, it will assuredly make matters worse. No one will want to make such a pledge. Firms and others won’t even try. They will hide away and not forewarn society about what those darling dancing robots can be perilously turned into.

Skeptics proclaim that the way to get society to wise up entails other actions, such as dropping the fanciful act of showcasing the frolicking dancing AI robots. Or at least make it a more balanced act. For example, rather than solely mimicking beloved pet-loyal dogs, illustrate how the dancing robots can be more akin to wild unleashed angry wolves that can tear humans into shreds with nary a hesitation.

That will get more attention than pledges, they implore.

Pledges can indubitably be quite a conundrum.

As Mahatma Gandhi eloquently stated: “No matter how explicit the pledge, people will turn and twist the text to suit their own purpose.”

Perhaps to conclude herein on an uplifting note, Thomas Jefferson said this about pledges: “We mutually pledge to each other our lives, our fortunes, and our sacred honor.”

When it comes to AI robots, their autonomy, their weaponization, and the like, we are all ultimately going to be in this together. Our mutual pledge needs at least to be that we will keep these matters at the forefront, we will strive to find ways to cope with these advances, and somehow find our way toward securing our honor, our fortunes, and our lives.

Can we pledge to that?

I hope so.

Sun, 09 Oct 2022 00:00:00 -0500 Lance Eliot en text/html https://www.forbes.com/sites/lanceeliot/2022/10/09/ai-ethics-and-ai-law-asking-hard-questions-about-that-new-pledge-by-dancing-robot-makers-saying-they-will-avert-ai-weaponization/
Journey of This Hackathon-loving Tech Leader at Walmart

With over 15 years of experience in software engineering, Sandeep Shekhawat, director of engineering at Walmart Global Tech, USA, has worked across companies – Meta, WhatsApp, Apple, Yahoo! and Hewlett Packard – advising and developing IoT-based retail tech solutions and native cloud-based systems. 

Analytics India Magazine interacted with Shekhawat, diving into his journey as a technical leader. 

Shekhawat currently leads the associate productivity teams responsible for putting smarter apps and technologies in the hands of Walmart associates for smoother store operations using digital transformation. With the deployment of technology such as IoT and AI, his role is to bring efficiency to the firm’s operations. 

“The role of technology is critical in everything we do at the store. Right from the team checking in to the store, seeing their schedules for the tasks assigned to more sophisticated apps that identify where pricing is missing. We use AI-based technology to identify where items are missing and replenish them. Running a store is a complex operation and doing it at Walmart scale is challenging. By using tech, we want to Excellerate our associates’ productivity in the store and meet them where they are.”

The engineer feels that having a great collaboration between product and design is critical for success, for strategy, ideation and goal setting. “To keep my calendar sane, I do follow the rule of going through my meetings the week before to move out meetings that could be done offline or over email. Second, is to ensure that I am respecting my engineers’ focus time.”

Deep dive into the AI Journey

Shekhawat says that AI/ML piqued his interest when he enrolled in the course Empirical Methods in Machine Learning and Data Mining at Cornell University. “I was exposed to neural networks then and the one thing that stood out was how technology had the potential to mimic human neurons. Over the first few years of my professional journey, I took on smaller ML projects. Finally, I got an opportunity to work on an enterprise-grade learning platform at Apple.” 

Shekhawat states that the platform had millions of users, spanning both enterprise and consumer customers. “I was able to design and implement the content recommendation engine based on users’ past usage, content importance and other factors. I continued my work in the AI/ML area and worked on developing models that were deployed in stores to help identify store layouts and inaccuracies in them by processing store images taken from surveys or store-submitted pictures.” Ever since, Shekhawat has continued his interest in this domain by trying to find opportunities to integrate AI as required in the company’s projects.
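As a rough, hypothetical illustration of the kind of blended scoring such a recommendation engine might use (the weights, field names, and catalog below are invented for the sketch, not the actual system he built):

```python
def score(item, usage_counts, w_usage=0.7, w_importance=0.3):
    """Blend a user's normalized past usage with a content-importance score."""
    max_count = max(usage_counts.values(), default=0) or 1
    usage = usage_counts.get(item["id"], 0) / max_count
    return w_usage * usage + w_importance * item["importance"]

def recommend(items, usage_counts, top_k=2):
    """Rank catalog items by blended score and return the top_k ids."""
    ranked = sorted(items, key=lambda it: score(it, usage_counts), reverse=True)
    return [it["id"] for it in ranked[:top_k]]

# Hypothetical learning-content catalog and one user's usage counts.
catalog = [
    {"id": "intro-course", "importance": 0.9},
    {"id": "advanced-course", "importance": 0.4},
    {"id": "policy-doc", "importance": 0.2},
]
usage = {"advanced-course": 8, "policy-doc": 2}

print(recommend(catalog, usage))  # ['advanced-course', 'intro-course']
```

In a production engine the “other factors” he mentions would enter as additional weighted terms or a learned model, but the blend-and-rank shape stays the same.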

Shekhawat shares his insights on solving an AI problem: “My approach is very similar to any other problem that we solve. Firstly, I try to understand the problem. Second is the goals we have set for ourselves. If we do not have specific goals, we might be chasing an outcome which we cannot accomplish. So clearly defined goals are very critical. Lastly, with ML problems, quality training data is huge. Sometimes we spend months just collecting, cleaning and prepping our training data before we even write our first line of the ML model.”
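The months of “collecting, cleaning and prepping” he describes can be sketched in a simplified, hypothetical form for image-label training records like the store-survey ones mentioned earlier (the field names and label values are invented for the example):

```python
def prep_training_data(raw_rows):
    """Return cleaned rows: deduplicated, complete, with normalized labels."""
    seen = set()
    cleaned = []
    for row in raw_rows:
        # Drop incomplete records before they can poison training.
        if row.get("image_id") is None or row.get("label") is None:
            continue
        # Normalize labels so 'Shelf-Gap ' and 'shelf-gap' match.
        label = row["label"].strip().lower()
        key = (row["image_id"], label)
        # Drop exact duplicates.
        if key in seen:
            continue
        seen.add(key)
        cleaned.append({"image_id": row["image_id"], "label": label})
    return cleaned

# Hypothetical raw survey records with the usual messiness.
raw = [
    {"image_id": "a1", "label": "Shelf-Gap "},
    {"image_id": "a1", "label": "shelf-gap"},   # duplicate after normalizing
    {"image_id": "a2", "label": None},          # missing label, dropped
    {"image_id": "a3", "label": "Planogram"},
]
print(prep_training_data(raw))  # keeps a1/'shelf-gap' and a3/'planogram'
```

Real pipelines add many more stages (class balancing, annotation review, train/validation splits), but dedup-and-normalize passes like this are typically where the cleanup starts.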

During the pandemic, most tech companies were much better prepared to handle the work environment change as compared to other industries, Shekhawat feels. 

Sharing a few leadership learnings from his experience in the tech industry, he said, “As a leader you always have to listen to your teams. Leaders can do that effectively by trying to adopt a bottom-to-top culture in your teams. By bottom-to-top culture, I mean prioritise work and projects that the team likes to work on, which aligns with the team’s goals. This helps engineers with picking projects that they like the most and keeps them motivated. Overall I think the best way to be successful is by investing in your engineers’ growth by identifying the right skillset and finding projects that will help further their growth.”

Contributions to technological innovation

Shekhawat has been at hackathons both as a participant and as a judge. He has also been mentoring coding bootcamps, helping the community of next-gen engineers get ready for software tech. He is an active contributor in the IoT space and has been a panel member at conferences.

“I still feel the excitement of churning out code and making things work in a collaborative and hacky way. I love the enthusiasm people bring to hackathons and conferences. A key to running a successful hackathon is clearly articulating the idea and theme, providing support (through experts) to newcomers and providing the right data set (if ML related) to the participants so that the ideas are successful.”

Sandeep Shekhawat, director of engineering at Walmart Global Tech

He adds that the journey of being a tech leader wasn’t smooth. “I have been fortunate to have not encountered the ‘glass ceiling’ in my career. Like any technical leader, I have had to reinvent myself a few times to get to where I am. I started off as a database engineer because that’s what I liked. However, the opportunities in my company were more suited for Java and backend development, so I pivoted. As mobile applications took off, I had to adjust and understand the mobile development world. Finally, as you grow in the tech ladder you need to become good at asking the right questions and going deep to make sure the team is on the right path. You should be able to see a design, plan, or architecture and get to the core issue quickly.”

At Apple, he was responsible for deploying IoT-based digital pricing solutions in all of Apple’s retail and channel stores worldwide. “One of the highlights of my time at Apple was that as an engineering manager, I got to be part of the first launch of Apple Watch and developed a few key apps for it. I was also a new EM back then, so forming my team and recruiting new talent was a big challenge. Launching integrated affordability payment solutions in Apple’s channel stores helped customers purchase iPhones in India by splitting payments as an EMI without overwhelming paperwork.”

Shekhawat expects the adoption of AI/ML to continue growing across sectors. “Be it healthcare, self-driving, the consumer industry or the tech industry, AI has a huge role to play. In some areas adoption and advancement would be much faster than in others. There is a lot of work being done in the automation world and I feel that that’s the right industry for disruption through AI/ML.”

Source: Analytics India Magazine, October 9, 2022 (https://analyticsindiamag.com/journey-of-this-hackathon-loving-tech-leader-at-walmart/)
I binge-listened to tech podcasts for a week, and what I learned about Silicon Valley is kind of scary

Recently I volunteered to listen to as many podcasts about tech investing and venture capital as my soul could handle. Which was stupid of me, because there are so many of them.

Andreessen Horowitz, the famed venture firm, produces a basic-cable channel's worth of programming. The big podcast networks, public-radio hosts, former public-radio hosts turned venture capitalists, venture capitalists with a lot of free time on their hands — every millionaire and billionaire within shouting distance of San Jose, it feels like, is podding. Or casting. Maybe this is the singularity they keep promising us.

That critical mass warrants a critical response. I wanted to hear what these shows have to say about the VC mindset. They're designed to let tech investors and founders control their own narratives, free of annoying questions from journalists like me, but they also promise a kind of education — in investing, in entrepreneurship and innovation, in business. So with the help of some expert colleagues, I put together a list of about a half dozen of the most popular and influential podcasts by tech investors — from "How I Built This" and "The Pomp Podcast" to "Acquired," "All-In," "The Twenty Minute VC," and "This Week in Startups" — and put my ears toward figuring them out.

All told, I poured something like 40 hours of podcasts into my auditory cortex. I leaped around somewhat whimsically, and I did my best to ignore the uneven production values, which rendered a few episodes almost unlistenable — and this was at 1x speed. I also tried to finish episodes rather than noping out, even when they made me squawk or swear loud enough to annoy my work-from-home office-sharer. (That happened not infrequently.) 

Well, I definitely learned some learnings. And they weren't all pretty.

Learning No. 1: The hosts know their businesses

At their best, the podcasts offer some interesting insights into how tech businesses run. I'm not sure anyone should contribute to a venture fund with a lead partner who has enough spare time to make five podcasts a week. But that said, the hosts on many of these shows are investors. They're experts.

On "The A16z Podcast" (named for Andreessen Horowitz), Michael Dell, the founder of Dell Technologies, lit into the investor Carl Icahn for lying about his effort to seize control of the company. On "How I Built This," the founder of the online game platform Roblox told charming stories about making educational software for Apple's then-new Macintosh in the 1980s.

On "This Week in Startups," Jason Calacanis explained to his cohost Molly Wood that some venture-investment contracts feature a provision known as a liquidation preference, which enables the VC investors to profit even when a startup they've backed goes bust. Wood immediately put her finger on how such provisions undercut the self-image of venture capitalists as heroic risk-takers. "So this was just some magical thing VCs started writing into contracts that said: We know our jobs are risky, but we don't want them to be risky?" Yup.
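The mechanics of a liquidation preference can be made concrete with a small worked example. The deal terms below (a 1x non-participating preference, 20% ownership, the dollar amounts) are hypothetical:

```python
def split_exit(exit_value, invested, pref_multiple=1.0, vc_ownership=0.2):
    """Split sale proceeds under a simple non-participating liquidation
    preference: the VC takes the greater of its preference (invested
    capital times the multiple) or its pro-rata share, capped at the
    exit value. A simplified sketch of one common term, not legal advice."""
    preference = invested * pref_multiple
    pro_rata = exit_value * vc_ownership
    vc_take = min(exit_value, max(preference, pro_rata))
    return vc_take, exit_value - vc_take

# A VC puts in $10M for 20%; the startup later sells for just $15M.
vc, founders = split_exit(15_000_000, 10_000_000)
print(vc, founders)  # the VC recovers its full $10M; everyone else splits $5M
```

If the company sells for less than the amount invested, the preference soaks up everything (try `split_exit(4_000_000, 10_000_000)`: common shareholders get zero), which is exactly the risk-shifting Wood is pointing at.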

I didn't hear any tips for Startups 101, like getting a meeting with an investor or building a pitch deck or a strategy memo. But I did learn a bunch of stuff I'm unlikely ever to have the opportunity to operationalize, like how to structure a marketing division, or how to think about assembling an investment portfolio as a VC. The technical descriptions from the A16z portfolio — deep unpackings of things like the cryptocurrency ethereum's move from proof-of-work to proof-of-stake, say, or TikTok's algorithmic genius — sometimes went over my head, but I can imagine their utility to engineers and founders.

Learning No. 2: The hosts don't know what they don't know

The problem is, VC podcasts don't stick to the core issues of venture capital. When they attempt to address the wider world beyond their area of expertise, things get weird.

On one episode of "This Week in Startups," explaining his philosophy of direct-to-consumer investing, Calacanis extolled the virtues of a company that makes gummy-candy vitamins, describing it as a "game changer" for getting kids to take them. Which may be true, except that healthy children rarely need supplemental vitamins.

Another example: The "All-In" podcast is designed as a friendly chat among Calacanis and his fellow investors Chamath Palihapitiya, David Sacks, and David Friedberg — "industry veterans, degenerate gamblers, and besties," as the show's ad copy puts it. At one point, Palihapitiya talked about one of his latest investments, a nutraceutical company making some sort of supplement to support healthy gut bacteria. The besties responded with a bunch of funny poop jokes, which I appreciated. But I was surprised at how little they seemed to know about nutritional research. Though the gut microbiome has been implicated in some specific disorders, like certain infections, the hosts acted as if the science of how the myriad species of bacteria in everyone's tummies are directly connected to our mental and physical health was a done deal, ready for market. It's very much not.

Or take the episode of "The Pomp Podcast" that featured Layah Heilpern, an "author, content creator, and speaker." Heilpern, who sounded like a nice person, argued that all young women wanted to be in relationships with powerful, rich men. "I'm single, and I want to get married and I want to find a masculine man," she said. Also, Donald Trump was right about being allowed to grab women by their genitals. Why? Because he was famous.

Anthony Pompliano, host of "The Pomp Podcast," interviewed an influencer who wants to marry a "masculine man."
Harry Murphy/Getty Images
To his credit, the show's host, Anthony "Pomp" Pompliano, asked whether Heilpern thought that was a good thing. She said it was good that free speech allowed Trump to say it. No, Pomp said, not is it OK to say, but is it OK for society to be like that?

I didn't hear her response, because that was the moment I reached above my head, pulled the ejection handles, rocketed out of my fighter jet, slammed into the canopy, and died.

Learning No. 3: The hosts want us to believe what they don't know

There's a shocking amount of this kind of drivel on the tech podcasts. That's partially because the hosts fill time with whatever pops into their heads about what they read on their favorite news app that morning. But it's also because they are consciously using the shows as platforms to spread not just entrepreneurial insights, but the ideology of Silicon Valley. They made it crazy rich in whatever startup they founded or invested in, so now they think they're experts in how the world should be run. They're not just telling us how to invest. They're telling us how to think. 

On an episode of "All-In" that touched on President Joe Biden's plan to forgive more than $300 billion in student-loan debt, Palihapitiya argued that it would've been better to let the free market sort everything out through existing bankruptcy protections. That's a common trope on VC podcasts, championing free enterprise over government regulation. But then one of the show's other "besties" — with four people talking, I sometimes lost track of who was saying what — went even further. Democrats want to forgive college debt, he declared, because people who go to college are far more likely to become Democrats. Forgiving student loans is part of a liberal conspiracy to support what he called "woke madrassas" that brainwash teenagers into becoming cancel-culture social-justice warriors.

Dropping out of college is a recurring theme on these podcasts. It's part of the mythos of the heroic tech founder — ditching the University of Squaresville to make billions thinking outside the box. Actually, the über-investor Marc Andreessen had the hotter take here. On "The A16z Podcast," Michael Dell explained that he left college because the business he was running from his dorm room was making $80,000 a month, and Andreessen suggested that all the anti-college talk in Silicon Valley was just survivorship bias. No one talks about the would-be founders who quit college and fail, he pointed out. And the best-known dropouts, such as Dell, Mark Zuckerberg, and Bill Gates, already had moneymaking businesses when they quit.

Andreessen's comment made me realize that a lot of what I was hearing on these podcasts was itself a form of survivorship bias. All these profiles of companies and interviews with founders imply that even a dummy like me could learn the secrets of engineering a unicorn or disrupting a market. But that's because the podcasts, by their nature, focus almost exclusively on success stories. We're not hearing about the failures except as narrative beats in larger stories of triumph.

Learning No. 4: Rich is more important than good

When journalists do one-on-one interviews, they're supposed to deploy tough questions. The hosts on tech podcasts rarely do that. One exception is Guy Raz, the host of "How I Built This," who clearly does a deep research dive for his hours-long interviews. His episode with David Baszucki, the founder of Roblox, explored decades of tech culture and innovation through the lens of Baszucki's career. This is what a good tech podcast should do: Use access to the best and most successful investors and innovators to illuminate the way Silicon Valley works.

But VC podcasts almost never ask the central question involved in their work: What is the definition of "success"? If success is just something that is popular and makes money, which is the overall vibe on these podcasts, then Roblox is an unarguable success. But if success is something that contributes to societal good — something that meets Silicon Valley's self-professed value of making the world a better place — then maybe Raz should have pushed Baszucki about Roblox's issues with encouraging children to spend money online and serving as a playground for actual real, live Nazis.

Rivian, the electric-truck maker, made a big splash a few years ago with its Tesla-competing high-end SUV and pickup. So props to RJ Scaringe, Rivian's founder, for telling Raz that he used to lie awake at night wondering whether his startup deserved to exist — that he wanted to have a company that did good for the world. He was the only founder I heard explicitly frame that as a metric for success, alongside revenue and growth. (Over on "This Week in Startups," to be fair, Wood explicitly invests in tech to help fix the climate.)

Guy Raz, host of "How I Built This," does deep research dives for his interviews.
Nikola Gell/Getty Images

But as good as Raz is, he didn't push Scaringe very hard on whether his obviously well-intentioned company's goals even made sense. Are large electric trucks really the right way to address road congestion, highway deaths, and the climate crisis? By almost any measure, they are not. But that's not what matters in the world of tech podcasts. For all of Silicon Valley's self-mythologizing around doing good in the world, the only thing that founders and investors seem to care about on these shows is doing good for themselves.

Learning No. 5: The secret of successful startups is there is no secret

In his introductions, Raz is careful to lay out one or two core lessons from his interviews. The vibe is, "Here's what we can learn about business from these titans of industry." But I'm not confident that the lessons Raz lays out are the things that actually made these companies work. Raz prefaced his interview with the two founders of the Bored Ape Yacht Club — valuation: $4 billion! — by explaining that neither Wylie Aronow nor Greg Solano were technical at all. They were creative-writing majors who got interested in crypto and recruited a couple of other pals to handle the back end. It was their genius for storytelling, Raz seemed to suggest, not coding, that made their idea soar.

Both Aronow and Solano had thoughts about why personality-heavy pictures of cartoon apes might turn into a multibillion-dollar brand without an actual product. "You get to Disney-levels of empathy and belovedness for a brand much sooner when somebody can own a little piece of Mickey Mouse," Solano said. But no matter how hard Raz pushed — and he tried — he couldn't get either man to articulate what their creative insight was. They didn't seem to have any idea about why they'd been successful when so many other NFTs had failed. The main lesson of their success, if there was one, was "be lucky."

The same thing happened when Raz interviewed Sarah Kauss, the founder of the S'well brand of insulated water bottles. At the top, Raz proposed that her signal insight had been bringing sophisticated design to, well, thermoses. Which, as an explanation for tech-industry success, is a bit off message. It's hard to see how "make the bottle elegant" reflects any of the elements that investors on these shows say they look for: disruptive innovation, intrinsic intellectual property, a business with a moat around it that keeps major players from stealing it. The bottles are just … nice. They got popular. S'well became a monster brand. In short: business as usual.

Learning No. 6: The valorization of the asshole

On that "A16z" podcast with Michael Dell, the hosts — one of whom had actually worked with Dell — spent quite a bit of time talking about his reputation as being a generally nice guy. Dell himself talked about firing a relatively high-level person for not being a team player. But he also said that to succeed in business, "You have to be a deviant and mischievous and a rule breaker, and that's not for everyone."

The VC legend Marc Andreessen appears on his firm's podcasts — and sometimes cuts through the b.s.
Justin Sullivan/Getty Images
So, fine, that's not necessarily a will to power. But lots of these shows have a creepy, almost Randian undercurrent that suggests that only a certain kind of person can really build a multizillion-dollar business — someone of unusually vast creativity and intellect. On another "A16z podcast" episode, a veteran of Amazon, Hulu, and Oculus named Eugene Wei made a revealing comment: "The truth is, most people can't originate ideas," he said. That's a striking opinion to have about one's fellow humans, as if everyone else is a non-player character.

One episode of "This Week in Startups" opened with an incredulous discussion of retiring Disney CEO Bob Iger's decision to go work for a venture-capital firm owned by members of the Kushner family. Wood and Calacanis agreed on how much they admired Iger for his brains and "high EQ." He's famously smooth. But then Calacanis weighed in on how Silicon Valley thinks about niceness. Lots of investors, he said, view the founders they work with on a two-by-two grid: likable/unlikable on one axis, and high aptitude/low aptitude on the other. The most sought-after founders, he explained, are in the unlikable/high aptitude quadrant, because they make the most money. 

Assholery, in the Silicon Valley mindset, is a key to success. It's also … bad? If I sound naive in suggesting it's better for people to be kind to one another, try it this way: When bosses act like asses, they prevent those who work for them from achieving their potential, and maybe making their numbers. Anyone who treats their fellow humans the way Steve Jobs did is ignoring all the gentle successes where people got rich and made good stuff. Valorizing assholes is just another form of survivorship bias. You think jerks are good for business because jerks are all you've known.

Learning No. 7: Bigger crowds don't always mean more wisdom

The tech-investment podcast universe feels like an invasive lily pad taking over and choking what was, admittedly, already a swamp. As the internet has taught us, high-quality information is usually behind a paywall. Tech podcasts are free. And long. "All-In" has nearly 100 episodes, all longer than an hour. "Acquired" is in season 11; its current episode on Amazon is more than four hours long. "The Pomp Podcast" routinely breaks the two-hour barrier and has posted more than 1,000 episodes. 

This is a vast amount of content, a Mjolnir-strength whack to the information ecosystem. And it's the sheer scale of the podcasts, ironically, that makes them so dumb. The morning-chat-show structure forces the hosts to talk about whatever is in the news, pushing them to apply the rules of their narrow lanes across the entire information superhighway. The overly friendly interview structure provides no countervailing information and few hard questions. So the basic ideology of Silicon Valley — money is the only metric, being a jerk is fine, don't know what you don't need to — gets pushed into every corner like grout. The philosophy drowns out, talks over, and interrupts all the other ones. With repetition and reinforcement, it starts to feel true.

Here's how I know: It worked on me. At about hour 20, I was starting to question whether I would, in fact, be man enough to start my own business instead of being a mere soyboy cog. By hour 30 I was clicking through that site that lets people invest their retirement funds in crypto and asking my partner whether it might make sense to throw a small, risk-sensible percentage of our 401(k) at it. I could feel my intuitive map getting rewritten. Government is always bad, the market solves everything, clever investment is a real job even if you don't produce anything except money. I should start a company, grow fast, get a big exit, buy a house in Atherton and spend my leisure time urging people not to go to college and fighting the construction of protected bike lanes and multifamily housing.

I shook off the impulse; I remain an un-entrepreneurial swiller of lattes, lacking the risk-taking bravado (and large vacation home) of a founder. But after 40 hours of listening to tech podcasts, I feel kind of bad about it.

Adam Rogers is a senior correspondent at Insider.

Source: Business Insider, October 3, 2022 (https://www.businessinsider.com/tech-podcasts-silicon-valley-ideology-andreessen-horowitz-calacanis-palihapitiya-2022-9)
The 10 Best Movies on Apple TV+, Ranked (October 2022)

Apple TV+ is one of the strangest streamers out there, with almost no licensed TV or film content and a small number of originals. Apple is clearly taking a “quality over quantity” approach, with its money spread across genres and targeted at making its subscribers (many roped in with a deal that came with one of the company’s tech products) treat it like a real contender. It also helps that it’s only $4.99 a month, or free for a year if you’ve just purchased a new (and eligible) device.

With films from up-and-comers like Minhal Baig, arthouse favorites like Sofia Coppola and Werner Herzog, some A-list music docs and one of the best animated movies of the 2020s, Apple TV+ is actually making the case that it belongs in the conversation alongside the more established services. As long as it keeps adding good movies to its roster, that is. It recently snagged a few critical darlings like CODA and The Velvet Underground.

Here’re our ranked picks for the best ten movies on Apple TV+ right now. You can also find our ranking of the best Apple TV+ original series.


The Velvet Underground
Watch on Apple TV+

“[The Velvet Underground] had entropy within it,” one of the many talking heads featured in Todd Haynes’ documentary reflects, chewing on the ultimate fate of the band at the center of it all towards the end of The Velvet Underground. It’s true that the avant-garde artists Haynes details in his first doc were more a single moment in time that rippled outward, a doomed endeavor not meant to last in the most immediately tangible way. Lou Reed and his ragtag team of black-clad counterculture musicians were a single thread within the vast, wide-spanning fabric of 1960s New York City, rubbing shoulders with artists, writers and musicians, and leaving a mark that would see their influence last long after the band’s members had already parted ways. In this respect, Haynes (who may be new to documentary but, with Velvet Goldmine and I’m Not There, is no stranger to music movies) aptly paints a portrait of The Velvet Underground, albeit not with people unfamiliar with the band in mind. He never spends too much time in the past that led to their artistic zenith or the legacy that it would leave behind, or even allows much space for true linear comprehension of the band at all. Through a rhythm which may feel inaccessible to more casual listeners, Haynes nonetheless effectively reckons with the moment that the band entered the world and the moment that they vacated.—Brianna Zigler


The Elephant Queen
Watch on Apple TV+

Filmed over the course of four years, The Elephant Queen follows revered matriarch Athena and the herd she shepherds across the unforgiving terrain in search of food and water. Like any nature doc worth its salt, the film is a gorgeous visual journey through what have come to be perilous times for the world’s charismatic megafauna, something never made explicit in the script narrated by a staid Chiwetel Ejiofor. Unsure whether it wants to be more Planet Earth or pure Disney fare, The Elephant Queen’s message is mixed as it chronicles Athena’s long journey. Early on, Alex Heffes’ whimsical score delights alongside footage of creatures found “a toenail height” to the elephants, including a particularly frightened frog whose pond the herd start stomping around in. But there is also an extremely difficult sequence not too much later that more coolly details the death of the herd’s youngest member from starvation. The Elephant Queen is messy, but it’s still a worthwhile nature watch that educates viewers on how important elephants are to the biomes across which they traverse and why. The documentary struggles to narratively incorporate a gaggle of other creatures encountered throughout, be they avian or amphibian, although just meeting these beings and learning a little more about their own life cycles is justification enough for their inclusion. There is a rawness and a beauty to the production that should be appreciated even through some of its more questionable choices. Because when there is a call to action at the very end of the film, I was ready to answer it. Sharing this story is one way to help Queen Athena protect her herd.—Allison Keene


Hala
Watch on Apple TV+

Writer/director Minhal Baig’s Hala is an intimate coming-of-age drama held up by its personal writerly touches and a star-making turn from Geraldine Viswanathan as the title character. Hala’s struggling with the same kinds of things we normally see high school characters struggle with: What to do after graduation, how to manage a relationship with her parents that’s not quite adult and not quite childish, and (of course) boys. Viswanathan’s understated quiet and the warmth in which the situations are shot (almost always centered on her face)—be they at a family dinner or a walk in a Chicago park or a reading of a high school English assignment—make the dramatic ricochet of Hala’s minor rebellion rattle us all the harder. Her relationship with a poetry-loving floppy-haired boy, her parents’ imperfections and a boatload of baggage brought from Pakistan (including the threat of arranged marriage) create a compelling portrait of a family that overcomes Baig’s sometimes sleepy direction. While there’s a lot, probably too much, going on around Hala—to the point that the movie threatens to shake apart—and the film tends to raise issues it’d rather not see through to any sort of conclusion, some striking shots, realistic dialogue (even in that heightened “everything’s the end of the world” way that teens can have) and Viswanathan’s ability to sell it all make the film a worthy and unique entry into the coming-of-age canon.—Jacob Oller


Boys State
Watch on Apple TV+

The tendency to read too much into Boys State as a representative of American politics—contemporary, functional, broken and otherwise—doesn’t quite line up with the event itself, in which every year the American Legion sponsors a sort of mock government sleepaway camp in Texas for high school boys (girls get a similar program of their own), where attendees join parties, run for office, craft platforms, run campaigns, hold debates, then ultimately exercise their right to vote. As one candidate for fake boy office explains, “My stance on abortion would not line up with most guys’ out there. So I changed my stance. That’s politics…I think. You can’t win on what you believe in your heart.” Money has no place in their policies, nor do women, immigration, or anything that isn’t gun control or abortion. They aren’t much interested in exploring U.S. governmental systems and lawmaking as they are in reinforcing an ideal of obsolescing democratic rule. There is no representation here, there are only screaming masses of peachfuzz and popularity contests. Instead of taking a divided nation’s temperature through its puberty-ridden youth, Jesse Moss and Amanda McBaine’s documentary becomes a dramatic account of modern American masculinity in the making, blisteringly hormonal and desperate to be taken seriously. —Dom Sinacola


On the Rocks
Watch on Apple TV+

Sofia Coppola’s new movie On the Rocks starts out as a story of possessive fatherhood, with Felix (Bill Murray) narrating to his teenage daughter, Laura: “And remember, don’t deliver your heart to any boys. You are mine until you get married. Then you’re still mine.” The girl laughs off the declaration as a jape, which turns out to be a catastrophic tactical mistake. In her womanhood, Laura (Rashida Jones), does indeed get married to a man, Dean (Marlon Wayans), and they have two beautiful daughters of their own, eldest Maya (Liyanna Muscat) and youngest Theo (Alexandra Mary Reimer). Dean is spearheading his own startup, a company that provides vaguely sketched-out services but which keeps him not only busy but in constant motion. Laura stays at home with the girls and, when she’s afforded rare moments of peaceful alone time, attempts to write a book the way Sisyphus attempts to push a boulder up a hill. She’s in a rut. Dean’s on the rise. He’s so often cross-country that the yawning gap between them is visible from the stratosphere, and then along comes Felix to sweep Laura up and indulge her fear that Dean in fact might be plowing his assistant, Fiona (Jessica Henwick), a knockout at least 10 years her junior. So begins a caper as Felix, protective by way of outmoded patriarchal charm, endeavors to prove Dean’s infidelity to prop Laura back up using all of his cunning and a not insignificant chunk of his wealth and social capital. On the Rocks suggests that men grown old are really just babies with an insatiable need for the world to love them, their kids—their daughters—in particular. Their childishness is revealed by the volume of their charisma: the taller the tales, the costlier the tab, the more blatant the flirt, the more extravagant the lifestyle, the more a man’s insecurity is revealed. Laura is at once drawn to and repelled by Felix. 
In light of Felix’s screed to young Laura, this is the inevitable crest of their bond, but Coppola’s gentle, yearning filmmaking generates sympathy for the father and empathy for the daughter. —Andy Crump


Bruce Springsteen’s Letter to You
Watch on Apple TV+

The black-and-white behind-the-scenes documentary accompaniment to Bruce Springsteen’s album of the same name, Bruce Springsteen’s Letter to You is a beautiful and companionable tour through the music and its making from an American master. Director Thom Zimny buys into the album’s concept, which focuses on just how long Springsteen’s been at this thing. Poignant juxtaposition with archival footage and pictures emphasizes just how long the E Streeters have been at this—and reminds us of who and what was lost along the way. It’s unabashedly emotional throughout and illuminating on occasion, but it’s mostly dedicated to giving Springsteen fans more of the album’s experience: Letter to You’s development is of minor importance in the film, but its performance is exceptional. In between tracks, Springsteen’s intense voiceover hovers over all the right imagery, with funerals, trains, snowy forests and lots of other muscular American iconography flitting by before another song starts up. The design of the documentary may be a bit repetitive, but the musicality is masterful and there’s nothing like letting The Boss extend a hand, inviting us to join in the campfire kind of collaboration with which this album was constructed and the lovely melancholy with which it appreciates the passing of time.—Jacob Oller


Watch on Apple TV+

Werner Herzog will show you multiple clips from Mimi Leder’s Deep Impact for no other reason than because he likes them, he finds them well-done and evocative—he says as much in that even-keeled, oddly accented voiceover—then soon after chastise “film school doctrine” when complimenting a field video shot by a South Korean meteor specialist in Antarctica. Like Nomad: In the Footsteps of Bruce Chatwin, his documentary from earlier in the year, Fireball (co-directed with Clive Oppenheimer, with whom he made 2016’s Into the Inferno) is less about what it’s about (meteorites, shooting stars, cosmic debris—and the people who love them) than it is about Werner Herzog’s life, which is his filmography, which is a heavily manipulated search for ultimate truth. This is all he makes movies about anymore: himself, navigating falsehood until he can master it, which is basically what he sees as moviemaking anyway. Unlike Nomad, Fireball is partly shot by Herzog’s trusted cinematographer Peter Zeitlinger, which rewards majestic drone shots—now Herzog’s old man bread and butter—with casual sublimity as often as despairing humor. Together they follow tangents all over the world, ridiculing the depressing Mexican town where a meteorite destroyed the dinosaurs and today stray dogs’ dreams rot from their heads, or collecting microscopic space rocks from the roof of an Oslo sports arena. All is at the mercy of Herzog’s curiosity, ravenous and insatiable. —Dom Sinacola

Watch on Apple TV+

Sometimes a movie so successfully plunges you into its world that it completely engulfs you in a lived-in experience. From the gorgeous, scenic opening moments of CODA, you can almost smell the Atlantic salt air and pungent scent of the daily catch. The movie transports you to Gloucester, Massachusetts and lovingly drops you into the life of one family. Seventeen-year-old Ruby Rossi (Emilia Jones) is what the title of the movie refers to—a child of deaf adults. She is the only hearing member of her immediate family. A senior in high school, Ruby lives with mother Jackie (Marlee Matlin), father Frank (Troy Kotsur) and older brother Leo (Daniel Durant). Every morning before school even begins, Ruby works with her brother and father on their fishing boat off the coast. As the family’s sole interpreter, they have come to rely on her, and she feels the weight of familial responsibility more than most high schoolers. When Ruby joins the school choir, her teacher Bernardo Villalobos (Eugenio Derbez) notices that Ruby has a unique vocal talent. “There are plenty of pretty voices with nothing to say. Do you have something to say?” he asks. He works with her and encourages her to apply to the Berklee College of Music in Boston, a move that would take her away from the family that not only loves but desperately needs her. On the surface, this coming-of-age story is that simple and straightforward. But writer/director Sian Heder weaves a beautiful, nuanced and complex tale buoyed by delicate and deft performances. Although specifically about a Deaf family, the story of a child wanting to form her own identity outside of her parents is universally relatable. It’s no surprise that Matlin is terrific. The Oscar-winner has been knocking it out of the park in both television and movies since she won for Children of a Lesser God when she was just 21. Durant is equally fantastic as a guy eager to prove he can do much more than what society and his parents think he’s capable of. 
Derbez hits just the right note as the supportive yet demanding teacher who won’t let Ruby use her family as an excuse. As a man who has had people misjudge him his whole life, Kotsur will break your heart. But CODA truly rests on Jones’ very capable shoulders. She’s such a compelling screen presence: Ruby’s inner turmoil is palpable. By the time the movie reaches its poignant, beautiful conclusion, I defy anyone to have a dry eye. CODA is about letting go and letting your loved ones soar.—Amy Amatangelo


Watch on Apple TV+

We could get into plenty of arguments over which Charlie Brown animated special is best, but A Charlie Brown Christmas is my favorite pull of the bunch. Charlie Brown’s confrontation with the Christmas season’s commercialism (back in 1965 no less) and a sad little fir tree make this a cartoon classic, as the ultimate funny-pages shlimazel suffers endless social indignities (no Christmas cards) and the holiday blues. The film remains a touching, funny 25 minutes that connects to kids both young and grown—capturing the spirit of Charles Schulz’s amusingly downer strip—ornamented with slapstick gags and the delightful jazzy Christmas score from the Vince Guaraldi Trio that’s become synonymous with the Peanuts crew. The animation might be a little jagged and repetitive—the child voice acting hit and miss—but the ragtag production helps make it extra endearing, as if the precocious children at the core of the holiday film had a real hand in putting it together. You’re not going to knock this film for those kids doing their weird dances on a loop and neither am I. It just wouldn’t be Christmassy of me.—Jacob Oller


Watch on Apple TV+

Wolfwalkers is filmmaker and animator Tomm Moore’s latest project out of Cartoon Saloon, the animation studio he co-founded in 1999 with Paul Young, and the capper to his loosely bound Irish folklore trilogy (begun with 2009’s The Secret of Kells and continued with 2014’s Song of the Sea). At first blush, the film appears burdened with too much in mind—chiefly thoughts on everything from English colonialism to earnest portraiture of Irish myths, the keystones of Moore’s storytelling for the last decade. Linking these poles are a story of friendship across borders and social boundaries, a dirge for a world pressed beneath the heels of men, a family drama between a willful girl and her loving but overprotective father, and a promise of what life could be if strangers reached across those borders and boundaries to find, if not love, then at least common ground. How Moore and his collaborators Ross Stewart and Will Collins created such a robust screenwriting economy that each of these threads not only fit into Wolfwalkers’ 103 minutes, but feel entirely essential to its vibrance, is likely a whole narrative unto itself. Their collective achievement speaks for itself, of course: Wolfwalkers is a stunning effort, the best of Moore’s career and the best Cartoon Saloon has produced to date. Every detail here, every flourish, has a purpose, whether splashes of red on flower petals, soft edges around dusk-lit trees, or three-panel split screen sequences that read like the pages of illuminated manuscripts brought to life. The effect is magic, and that magic is profound and breathtaking. —Andy Crump

Source: Paste Magazine, 2 October 2022 (https://www.pastemagazine.com/movies/apple-tv-/best-apple-movies/)
Standalone mode of 5G deployment the way ahead: Ericsson

By Jatin Grover

Telecom equipment maker Ericsson believes that Indian telecom companies will, over time, be forced to switch to the standalone mode of 5G deployment because of the growing number of 5G use cases.

“Let’s say five years into the future, there can potentially be some services that work only on the standalone technology, which could push the telecom operators towards this,” Magnus Ewerbring, chief technology officer for the Asia-Pacific region at Ericsson, told FE.

The key difference between the two modes is that in the non-standalone mode, 5G is deployed on top of the 4G network: devices use the existing 4G network for functions such as initiating calls and setting up initial connections, while 5G is used for faster data transfers.
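The split described above can be made concrete with a rough, hypothetical sketch. This is plain Python with invented names, not a real network API or anything from Ericsson: it models only which core network carries control-plane signalling versus user-plane data under each mode.

```python
# Illustrative model only: which core handles signalling vs. data traffic
# in non-standalone (NSA) and standalone (SA) 5G deployments.
from dataclasses import dataclass


@dataclass(frozen=True)
class DeploymentMode:
    name: str
    control_plane: str  # core that sets up calls and initial connections
    user_plane: str     # technology that carries data traffic

# Non-standalone: 5G radio is anchored to the existing 4G core, so
# signalling rides on 4G while 5G handles faster data transfers.
NSA = DeploymentMode("non-standalone", control_plane="4G EPC", user_plane="5G NR")

# Standalone: an independent 5G core handles everything, enabling
# faster response times and no inter-technology interruptions.
SA = DeploymentMode("standalone", control_plane="5G core", user_plane="5G NR")


def needs_standalone(mode: DeploymentMode) -> bool:
    """A service that must avoid falling back to 4G signalling requires
    a mode whose control plane is 5G-native."""
    return mode.control_plane == "5G core"
```

The point of the toy model is that both modes share the same 5G user plane; they differ only in which core anchors the connection, which is why some future services could run only on standalone networks.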


The non-standalone mode is cost-effective but lacks advanced 5G features, such as the ultra-fast connections required by enterprises. In the standalone mode, the 5G network is deployed independently of the 4G network.

For Ericsson, India is one of the top markets in terms of business opportunity. The telecom gear maker has tied up with telecom operators to roll out 5G networks in the country.

Ericsson is working on both standalone and non-standalone technologies everywhere in the world. The choice of architecture depends on the techno-commercial strategy of the operator. The gear maker said that both solutions are stable and closely tied to what operators want to offer in the market.

“With the standalone approach, you (an operator) can enable faster content response time (for users), less interrupt time between technologies, etc,” Ewerbring said. Barring Reliance Jio, telecom operators such as Bharti Airtel and Vodafone Idea are following the non-standalone approach to launch 5G services in the country because its ecosystem is much more evolved.

Lately, the top two telecom operators in the country have been criticising each other over their choice of architecture for deploying 5G.

There is a well-developed ecosystem for non-standalone 5G, and all devices work in this mode; in comparison, the standalone 5G architecture does not have a developed ecosystem, Gopal Vittal, managing director and chief executive officer of Bharti Airtel, had said in the post-results earnings call with analysts in August.


Vittal had even called Reliance Jio’s purchase of the 700 MHz band “highly expensive” and without any added advantage. “All it (700 MHz band) does is to provide coverage at the edge, deep indoors, and in far-flung areas, and it gives you 4G-like speeds, nothing more,” he had said during the call.

Spectrum in the 700 MHz band was essential for Reliance Jio to roll out 5G services on a standalone basis.

In response, Mukesh Ambani, chairman of Reliance Jio's parent company Reliance Industries, had said that the non-standalone mode would not deliver the optimum 5G experience to users. “The non-standalone approach is a hasty way to nominally claim a 5G launch, but it won’t deliver the breakthrough improvements in performance and capability possible with 5G,” Ambani had said at the 45th annual general meeting of the company in August.

According to Ewerbring, 5G technology will remain relevant for a long time; at some point 4G will start to decay, and at that stage standalone deployments will be the key driver.

On the growing trend of open radio access network, or Open RAN, technology, Ewerbring said that it brings more dependencies by involving different vendors, which could lead to compromises in the technology.

Open RAN involves using technology and equipment from multiple vendors when deploying a telecom network. Such technologies are expected to help telecom companies save on network equipment costs.
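To make the multi-vendor idea tangible, here is a loose, hypothetical sketch in plain Python. The component roles follow the commonly cited Open RAN split (radio unit, distributed unit, centralised unit), but the vendor names are placeholders and nothing here describes any real deployment:

```python
# Illustrative only: a traditional RAN sources every component from one
# vendor; an Open RAN mixes vendors over open interfaces, which is where
# the extra integration dependencies come from.
from dataclasses import dataclass


@dataclass(frozen=True)
class RanComponent:
    role: str    # e.g. "radio-unit", "distributed-unit", "centralised-unit"
    vendor: str


def vendor_count(components: list[RanComponent]) -> int:
    """Number of distinct vendors that must interoperate in this network."""
    return len({c.vendor for c in components})


ROLES = ("radio-unit", "distributed-unit", "centralised-unit")

# Single-vendor RAN: one supplier, one integration dependency.
single_vendor = [RanComponent(role, "VendorA") for role in ROLES]

# Open RAN: each component from a different (placeholder) supplier.
open_ran = [
    RanComponent("radio-unit", "VendorA"),
    RanComponent("distributed-unit", "VendorB"),
    RanComponent("centralised-unit", "VendorC"),
]
```

The contrast between the two vendor counts is the trade-off the article describes: more suppliers can lower equipment costs, but every additional vendor is another party that must interoperate correctly.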

Ewerbring believes that the ecosystem for open technologies will take time to develop. At a time when telecom companies are targeting a complete 5G rollout by next year, they should choose a robust and safer path, such as that provided by Ericsson, Nokia, and Samsung, rather than an exploratory one like open technologies, he said.

Source: Financial Express, 4 October 2022 (https://www.financialexpress.com/industry/standalone-mode-of-5g-deployment-the-way-ahead-ericsson/2700737/)
USI steps up deployment in automotive, industrial, and AR fields

Universal Scientific Industrial (USI), a subsidiary of Taiwan's ASE Technology Holding, has stepped up its deployment in the automotive, industrial, and AR device fields.

This year, USI began mass production of automotive-use insulated-gate bipolar transistors (IGBT) and silicon carbide (SiC) power modules.

USI has been expanding its layout in recent years, acquiring French company Asteelflash, Europe's second-largest EMS, in 2020. This year, it will participate in Electronica 2022, held in Munich, Germany. USI's newly established NK2 factory in Nantou, Taiwan, is also expected to start operations in the fourth quarter of 2022 and will focus on industrial handheld devices, multifunctional notebook docking stations, and wireless network communications products. The factory is expected to create 500 jobs at the Nangang Industrial Park.

USI is an important member of Apple's system-in-package (SiP) supply chain, according to sources. Outside of its main customers, USI's SiP module business is expected to see more than 50% growth with the possibility of exceeding US$400 million in revenue. Good initial response to the iPhone 14 Pro series and Apple Watch Ultra is expected to keep second-half SiP orders and utilization rates in line with traditional peak season levels, the sources said.

ASE and USI continue to invest in emerging technologies, such as gallium nitride (GaN) and Wi-Fi 6/7, despite slowing global consumer electronics demand. USI is also focusing on the automotive, industrial control, and power semiconductor fields.

USI noted that automotive power systems are an important development direction for its automotive electronics. In 2022, USI's automotive power system related business is expected to account for 20% of its total automotive business and up to 50% in 2024-2025. USI's main customers include Tier-1 automakers and automotive Tier-1 suppliers. USI's goal is for its automotive electronics business to reach US$1 billion in 2024. The sources pointed out that USI is already engaged in the automotive power module market.

ASE and USI also have plans to engage in the augmented reality (AR) and virtual reality (VR) sectors. USI already shipped a small number of Wi-Fi/BT wireless communications and SiP modules for AR applications this year. Because AR devices require high-speed, low-latency transmission of large volumes of data, along with a large number of sensors, SiP is expected to be an important technology in this area.

Source: DIGITIMES, 21 September 2022 (https://www.digitimes.com/news/a20220921PD214/ase-sip-usi.html)