
Google’s Roadmap For Porting Android Features Onto RISC-V Architecture


Just a few weeks ago, Google and Qualcomm announced their partnership to bring Android to RISC-V. Now, the tech giants are lifting the curtain, revealing a comprehensive plan that could reshape the landscape of Android development and device manufacturing by porting Android features onto RISC-V.

Google has laid out a detailed roadmap for OEMs and app developers to transition to RISC-V. The company aims to provide its partners with comprehensive support, similar to what is currently available. 

According to Google’s official blog, RISC-V is a modular ISA with a large number of optional extensions. Google has identified a critical set of features, including the RVA22 profile and the vector and vector-crypto extensions, to ensure high performance on any RISC-V CPU running Android.
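Because so many RISC-V extensions are optional, software routinely has to check what a given core actually implements. As a rough illustration (not Google's actual mechanism, and the required set below is a hypothetical subset rather than the full RVA22 contents), the ISA naming string that a Linux kernel exposes in /proc/cpuinfo can be parsed like this:

```python
def parse_isa_string(isa: str) -> set[str]:
    """Split a RISC-V ISA string such as 'rv64imafdc_zba' into its
    single-letter and multi-letter extension names.
    (Simplified: ignores version suffixes and the 'g' shorthand.)"""
    isa = isa.lower()
    assert isa.startswith(("rv32", "rv64")), "expected an rv32*/rv64* string"
    base, *multi = isa[4:].split("_")
    # Single-letter extensions come fused together; multi-letter ones
    # (zba, zvkn, ...) are underscore-separated.
    return set(base) | set(multi)

# Hypothetical required set for illustration only.
REQUIRED = {"i", "m", "a", "f", "d", "c", "v"}

def supports_required(isa: str) -> bool:
    """True if the ISA string advertises every required extension."""
    return REQUIRED <= parse_isa_string(isa)
```

For example, `supports_required("rv64imafdcv_zba")` holds, while a core advertising only `rv64imac` would be rejected.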

The Android Runtime (ART), the engine that powers Android apps, is already compatible with RISC-V CPUs, thanks to a series of patches. However, Google admits that optimization is still a work in progress. This is particularly true for the backend of ART, which has yet to be fine-tuned for maximum performance.

By the end of this year, Google aims to finalize the Native Development Kit (NDK) binary interface. This will pave the way for initial test builds that can emulate RISC-V Android apps on x86 and ARM host machines. These emulators are slated for a public release in early 2024, allowing for comprehensive app testing across all Android functions and device categories.

Android for RISC-V is already functional in Google’s virtualization solution, Cuttlefish. Furthermore, the RVA22 instruction set profile, complete with the vector and vector crypto extensions, will be the backbone of Android on RISC-V.

Google is also actively working on RISC-V support tools and the broader software ecosystem through the RISE project. This initiative involves numerous partners from both the hardware and software industries, aiming to accelerate the availability of software for high-performance and power-efficient RISC-V processor cores running high-level operating systems like Android and Linux.

Alibaba T-Head has reported significant progress in porting Android features onto RISC-V-based Xuantie cores. Their efforts have focused on Android 12 and enabling third-party modules to support video, camera, and Wi-Fi/Bluetooth features based on RISC-V. According to RISC-V’s blog, they have also provided insights into building TensorFlow Lite models on RISC-V-based cores.

Google’s meticulous planning and transparent communication signal a new era for Android — one that promises greater efficiency, flexibility, and collaboration.

15th Anniversary of the Bitcoin Whitepaper: How Satoshi Nakamoto Changed Finance Forever


Fifteen years ago, on October 31, 2008, an enigmatic figure named Satoshi Nakamoto introduced the world to Bitcoin through a nine-page whitepaper. Today, Bitcoin is not just a cryptocurrency; it’s a testament to the audacity of Nakamoto’s vision: a peer-to-peer electronic payment system that has defied skeptics, survived scandals, and become a financial juggernaut.

A Whitepaper That Shook the World

On October 31, 2008, Satoshi Nakamoto published the Bitcoin whitepaper. Titled “Bitcoin: A Peer-to-Peer Electronic Cash System,” this seminal document laid the groundwork for the world’s first cryptocurrency. It aimed to solve the “double spending” problem often associated with digital currency through a network of nodes and a proof-of-work consensus mechanism.
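The proof-of-work idea at the heart of the whitepaper fits in a few lines: keep hashing the block contents with an incrementing nonce until the hash falls below a target. The sketch below is a toy model only; Bitcoin itself uses double SHA-256, a compact-encoded target, and a real block-header layout, none of which are reproduced here.

```python
import hashlib

def mine(block_header: bytes, difficulty_bits: int) -> int:
    """Find a nonce so that SHA-256(header + nonce) has at least
    `difficulty_bits` leading zero bits. Toy illustration only."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_header + nonce.to_bytes(8, "little")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # proof found: cheap to verify, costly to produce
        nonce += 1

nonce = mine(b"example header", 16)  # ~65k hash attempts on average
```

Raising `difficulty_bits` doubles the expected work per extra bit, which is how the real network keeps block production steady as hardware improves.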

But the actual network wouldn’t launch until three months later: on January 3, 2009, Satoshi mined the first block, known as the genesis block, in Helsinki. The reward for mining this block was 50 BTC. The ease of mining back then allowed anyone with a regular computer to participate, democratizing the financial landscape.

On January 12, 2009, the world witnessed the first-ever Bitcoin transaction. In block 170, Satoshi Nakamoto transferred 10 BTC to developer Hal Finney.

Initially, Bitcoin was met with skepticism. However, nine months after its inception, it made its debut on the New Liberty Standard exchange, where 1,309 bitcoins could be purchased for just $11. Fast forward to November 2021, and those coins would be worth $45 million. Despite facing several challenges, including a “crypto winter” in 2022 driven by global economic issues and industry bankruptcies, Bitcoin has proven its resilience.

Bitcoin has come a long way since its inception. It was one of the first inventions to use cryptography to separate money from the state, enabling users to bypass banks and financial institutions; it has even become legal tender in El Salvador. The first real-world purchase made with Bitcoin came in May 2010, when Laszlo Hanyecz bought two pizzas for 10,000 BTC.

As of October 31, the 15th anniversary of the Bitcoin whitepaper, the cryptocurrency is trading at approximately $34,200, with a market capitalization of $668 billion, according to CoinMarketCap.

Apple M3 Chips: The Next-Gen Revolution in MacBook Pro and iMac


Apple has unveiled its next-generation M3 family of chips, accompanied by new MacBook Pro and iMac models. The latest members of the Apple silicon family promise unprecedented performance, efficiency, and capabilities, setting a new standard for both casual users and professionals.

Apple M3 Family:

The M3 family of chips, which includes the M3, M3 Pro, and M3 Max, is built on industry-leading 3-nanometer technology, resulting in a faster and more efficient next-generation GPU that marks the biggest advancement in graphics architecture ever for Apple silicon. The M3 chips also introduce a groundbreaking technology called Dynamic Caching. This feature allows the GPU to allocate the exact amount of local memory needed for each task in real time, significantly boosting GPU utilization and performance.

Specifications of M3, M3 Pro, and M3 Max:

Feature / Chip Model | M3 (Basic) | M3 Pro | M3 Max
CPU Cores | 8-core | 12-core | 16-core
GPU Cores | Up to 10-core | 18-core | Up to 40-core
Memory Support | Up to 24 GB | Up to 36 GB | Up to 128 GB
Dynamic Caching | Yes | Yes | Yes
Codecs Supported | AV1, H.264, ProRes, ProRes RAW, HEVC | AV1, H.264, ProRes, ProRes RAW, HEVC | AV1, H.264, ProRes, ProRes RAW, HEVC
Special Features | N/A | 40% faster than M1 Pro (16-inch model) | 50% more CPU performance and 20% more GPU performance compared to predecessors
Technology | 3-nanometer | 3-nanometer | 3-nanometer
Ray Tracing and Mesh Shading | Yes | Yes | Yes
Battery Life in MacBook Pro | Up to 22 hours | Up to 22 hours | Up to 22 hours

Performance Metrics:

According to Apple, the M3’s performance cores are 15% faster than the M2 and 30% faster than the M1. The efficiency cores have also seen a significant boost, with a 30% and 50% increase in speed compared to the M2 and M1, respectively. When pitted against an Intel Core i7-1360P, the M3 CPU, with its 4 performance cores and 4 efficiency cores, outperforms the Intel chip by over 10%, all while consuming just a quarter of the power.

GPU Capabilities:

The M3’s GPU is almost 70% faster while consuming a fifth of the power compared to its predecessors. It also supports a wide array of codecs, including the new AV1 codec, in addition to H.264 acceleration, ProRes, ProRes RAW, and HEVC. The neural engine responsible for AI acceleration has been improved by around 15% compared to the M2 and a staggering 60% compared to the M1.

The M3 in New MacBook Pros and iMacs

The new MacBook Pro models can be configured with all three M3 processors, offering up to a 16-core CPU and 40 GPU cores. They promise twice the rendering speed in Cinema 4D compared to their M2-equipped predecessors. The iMac comes with the basic M3 version but is said to be twice as fast as its M1 predecessor.


Both the MacBook Pro and iMac models feature a Liquid Retina XDR display that is 20% brighter, a built-in 1080p camera, an immersive six-speaker sound system, and a wide array of connectivity options, including Thunderbolt 4 and HDMI. The new MacBook Pro offers up to 22 hours of battery life and can be charged via MagSafe 3 or USB Type-C.

The 14-inch MacBook Pro with M3 starts at $1,599, while the M3 Pro and M3 Max models start at $1,999. The 16-inch MacBook Pro starts at $2,499. The iMac with the M3 SoC starts at $1,299.

The Cost of Zuckerberg’s Metaverse Dream: $50 Billion and Counting


Facebook’s parent company, Meta, has seen its Metaverse division drain nearly $50 billion in less than five years, according to a recent Business Insider report.

Reality Labs, the division responsible for Meta’s Metaverse and virtual reality solutions, has been a financial sinkhole since its inception. Starting in 2019 with a loss of around $5 billion, the division’s losses doubled to $10 billion in 2021, reached $14 billion in 2022, and have already surpassed $11 billion in the first nine months of 2023. The cumulative loss now stands at a jaw-dropping $47 billion.

Despite these astronomical figures, Meta remains undeterred. The company stated last week, “We expect Reality Labs’ operating losses to increase significantly in 2024.” This comes as part of Meta’s long-term vision for the Metaverse, a vision that includes significant investments in research and development. The company believes that these investments will yield innovative products and technologies that will be fully realized over the next decade.

Meta has, however, made some strides in its Metaverse development. Horizon Worlds, Meta’s virtual world, will soon be accessible not just through VR headsets but also via web browsers and mobile apps. This is a significant development, considering the platform’s previous limitations. Additionally, the Reality Labs division recently launched Meta Quest 3, a mixed-reality headset that marks another step in Meta’s hardware evolution.

However, the Metaverse has struggled with low user engagement, a fact that even Meta’s internal teams acknowledge. Employees within the company are reportedly not frequent users of the virtual world, raising questions about its long-term viability. This lack of user engagement is particularly concerning given the division’s soaring operational costs.

Interestingly, the company reported a 23% increase in sales for the third quarter of 2023, totaling $34.15 billion. Profits surged by 164% to $11.58 billion. These gains were primarily driven by a 31% increase in advertisements across Meta’s various platforms, including Facebook, Instagram, WhatsApp, and Threads. The number of daily users for these services has also increased by 7%, totaling 3.14 billion people.

While Meta’s overall financial performance has been robust, the Reality Labs division remains a significant concern. The company’s total expenses have fallen by 7%, now standing at $20.4 billion, partly due to layoffs in the past year. Yet, the Reality Labs division continues to be a financial burden, raising questions about the feasibility of Meta’s ambitious 10-year plan for the Metaverse.

U.S. States Take Legal Action Against Meta Over Youth Mental Health


41 U.S. states, including powerhouses like California and New York, have filed lawsuits against Meta Platforms Inc., the parent company of Facebook and Instagram. The lawsuits accuse Meta of knowingly designing addictive features that have contributed to a youth mental health crisis in America.

The lawsuit, filed in a federal court in California, is spearheaded by a bipartisan coalition of attorneys general from states such as California, Florida, Kentucky, Massachusetts, Nebraska, New Jersey, Tennessee, and Vermont. The complaint alleges that Meta has “harnessed powerful and unprecedented technologies to entice, engage, and ultimately ensnare youth and teens.” The motive behind these actions, according to the lawsuit, is profit. Meta is accused of misleading the public about the substantial dangers of its platforms and exploiting its most vulnerable users — children and teenagers.

The lawsuit also accuses Meta of violating federal children’s privacy laws by collecting data from underage users. This adds another layer to the already complex legal landscape that Meta finds itself navigating. The states involved in the lawsuit are demanding various remedies, including substantial civil penalties.

The lawsuit comes on the heels of a series of damning reports, including those published by The Wall Street Journal in the fall of 2021. These reports were based on Meta’s own internal research, revealing that the company was fully aware of the harmful effects its platforms could have, especially on teenage girls. 

The internal reports, leaked by former Facebook product manager Frances Haugen, indicated that Instagram was a contributing factor to depression, anxiety, and suicidal thoughts for a significant number of its users. Specifically, 13% of young British women and 6% of young American women cited Instagram as a driving force behind their suicidal ideation.

Meta’s Response:

In a statement, Meta spokesperson Liza Crenshaw expressed disappointment over the lawsuit, suggesting that the attorneys general should have collaborated with the industry to establish age-appropriate standards. Despite this, Meta has taken some steps to mitigate criticism, such as pausing plans for an Instagram app for children under 13 and introducing parental control tools.

However, these measures seem to be a drop in the bucket compared to the scale of the problem. With nearly 40% of Instagram’s one billion users being under the age of 24, the issue is far from resolved.

According to the Pew Research Center, almost all teens aged 13 to 17 in the U.S. report using a social media platform, with about a third saying they use social media “almost constantly.” This universal usage underscores the urgency of addressing the mental health implications of social media use among teens.

Understanding CAPTCHAs: How These Puzzles Keep the Internet Human


You’ve seen them, you’ve solved them, but have you ever stopped to wonder how CAPTCHAs actually work? These seemingly simple puzzles are a crucial line of defense in the digital world, protecting websites from bots and automated attacks. But there’s more to CAPTCHAs than meets the eye.

What Exactly is a CAPTCHA?

CAPTCHA is an acronym that stands for “Completely Automated Public Turing test to tell Computers and Humans Apart.” Despite its complex-sounding name, the purpose of a CAPTCHA is quite straightforward: it’s designed to figure out if you are a human or a computer program. The concept was introduced in 2000 by researchers Luis von Ahn, Manuel Blum, and Nicholas J. Hopper from Carnegie Mellon University, along with John Langford from IBM.

You’ve likely encountered CAPTCHAs when logging into websites, filling out online forms, or even making online purchases. They usually appear as a small test that you must pass to prove you’re human. These tests can come in various forms. The most common one involves typing out distorted letters and numbers that appear on the screen. The idea is that a human can easily read these distorted characters, but a computer program, like a bot, would find it challenging.

CAPTCHAs serve a vital role in internet security. They protect websites from being abused by automated programs or bots. These bots are often designed to carry out tasks like spamming a service, scraping data, or brute-forcing passwords. By adding a CAPTCHA, websites can ensure that their services are used by actual humans, thereby providing a first line of defense against various types of online abuse.

So, in simple terms, a CAPTCHA is like a security guard that stands at the entrance of a website, asking a simple question to make sure you’re a human and not a robot trying to sneak in.

How Do CAPTCHAs Work?

In a standard CAPTCHA test, the distorted text is generated randomly and presented to the user, who then has to type these characters into a form field. Once you enter the correct sequence and hit the “Submit” or “Verify” button, the website confirms you’re a human and allows you to proceed. This is the most traditional form of CAPTCHA, and it’s still widely used today, although there are several other types of CAPTCHAs as well, which are explained below.


The Role of Distortion

The reason the text is distorted is to throw off bots. While humans have the ability to recognize distorted characters, bots find it challenging. They are programmed to read standard text, so when they see distorted text, they can’t interpret it. This is why you, as a human, have to step in and type the correct characters.

Time Limits

It’s worth noting that CAPTCHAs usually have a time limit. If you don’t complete the test within a certain period, it will expire, and you’ll have to start over. This is another feature designed to make it difficult for bots to get through.
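Put together, the generate-and-verify loop with an expiry window can be sketched as below. This is a minimal server-side model under assumed names; the 120-second TTL is illustrative, and a real deployment would render the challenge as a distorted image rather than handing the plain string to the client.

```python
import secrets
import string
import time

ALPHABET = string.ascii_uppercase + string.digits
TTL_SECONDS = 120  # illustrative expiry window

def new_captcha(length: int = 6) -> tuple[str, float]:
    """Return a random challenge string and its creation timestamp.
    A real system would render the string as a distorted image."""
    challenge = "".join(secrets.choice(ALPHABET) for _ in range(length))
    return challenge, time.monotonic()

def verify(challenge: str, created_at: float, answer: str) -> bool:
    """Accept only a correct answer submitted before the challenge expires."""
    if time.monotonic() - created_at > TTL_SECONDS:
        return False  # expired: the user must request a fresh challenge
    # Constant-time comparison; answers are matched case-insensitively.
    return secrets.compare_digest(challenge, answer.strip().upper())
```

Using `secrets` rather than `random` matters here: the challenge must be unpredictable to an attacker, and `compare_digest` avoids leaking information through timing.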

Types of CAPTCHAs:

Advanced Versions: reCAPTCHA

Over time, CAPTCHAs have evolved to become more sophisticated. One advanced version you might have encountered is Google’s reCAPTCHA. Instead of just typing text, you might be asked to select all the squares in a grid that contain a certain object, like a traffic light or a fire hydrant. This task is relatively easy for a human but extremely difficult for a bot.

Audio CAPTCHAs

For those who have difficulty seeing or reading, many CAPTCHA systems offer an audio option. In this version, you’ll hear a sequence of numbers or letters and will be asked to type them in. This makes CAPTCHAs accessible to users with visual impairments.

Interactive CAPTCHAs

Some websites have started using interactive CAPTCHAs that require users to drag and drop objects into a puzzle. For example, you might have to drag a piece of fruit into a basket among a set of random items. The way you interact with these elements — how long it takes you, the paths you move along, the choices you make — provides additional data points that help the system determine whether you’re human.

Behavioral Analysis

Some modern CAPTCHAs don’t even require you to enter text or select images. They analyze your behavior as you interact with the website. For example, the way you move your mouse may be enough to prove you’re a human. These are known as “No CAPTCHA” tests, and they’re becoming more common.

CAPTCHAs on Mobile Devices

With the increasing use of smartphones, CAPTCHAs have had to adapt. On a mobile device, you might encounter CAPTCHAs that take advantage of the device’s features, like its touchscreen. You could be asked to trace a shape or connect dots in a specific order. These tasks are easy for a human but remain challenging for a bot, especially one that’s not optimized for touch interactions.

Multi-Factor Authentication and CAPTCHAs

Some websites use CAPTCHAs in conjunction with other forms of verification, like two-factor authentication (2FA). In such cases, even if a bot somehow manages to solve the CAPTCHA, it would still need to bypass the additional security layer, making the system even more secure.

The Role of Machine Learning

As machine learning technologies advance, there’s a race between making CAPTCHAs more complex and the development of bots that can solve them. Some bots now use machine learning algorithms to interpret distorted text or even recognize basic objects in images. This has led to the development of even more advanced CAPTCHAs that use dynamic elements, changing in real-time, to thwart such bots.

The Future: Biometric CAPTCHAs?

As technology continues to evolve, we may soon see the introduction of biometric CAPTCHAs, which could use fingerprint or facial recognition to verify identity. While this could offer a higher level of security, it also raises important questions about privacy and data protection.

Conclusion

CAPTCHAs act as a crucial barrier that keeps automated bots at bay, safeguarding the integrity of websites and online services. As technology advances, CAPTCHAs continue to evolve, becoming more sophisticated to meet new challenges. So, the next time you encounter one of these puzzles, take a moment to appreciate the complex technology that works tirelessly to make the internet a safer place for us all.

Google Spent $26 Billion to Secure Its Position as Default Search Engine


Google spent an eye-watering $26.3 billion in 2021 to secure its position as the default search engine on mobile phones and web browsers. This disclosure, made public during a federal antitrust trial, has ignited a firestorm of debate about the future of online search and competition within the tech sector.

Bloomberg was the first to report this figure, which was confirmed by a Google manager. The case at hand involves lawsuits filed by the U.S. government and multiple states, accusing Google of stifling competition. Google, however, vehemently denies these allegations, arguing that users prefer its search engine due to the quality of search results.

The lion’s share of this colossal sum is believed to go to Apple. The Cupertino-based tech giant is said to have pocketed around $18 billion in 2021 alone, according to a New York Times report. This hefty payout is likely the catalyst behind Apple’s negotiations with Microsoft to potentially adopt Bing as its default search engine.

But Apple isn’t the only beneficiary. Google’s web of contracts extends to other major players in the tech industry. These include popular device manufacturers like LG, Motorola, and Samsung, as well as major U.S. wireless carriers such as AT&T, T-Mobile, and Verizon. Browser developers like Mozilla, Opera, and UCWeb also secure default status for Google’s search engine, often at the exclusion of Google’s competitors.

The U.S. Department of Justice and a coalition of state attorneys general have accused Google of illegally maintaining its monopoly power in general search. They argue that Google’s massive spending locks rivals out of key distribution channels, such as Apple’s Safari web browser, making the search engine sector less competitive.

John Schmidtlein, Google’s legal counsel, defended the company’s position at the onset of the trial. He argued that changing the default search engine is a straightforward process for users. This, he claims, is especially true on Windows computers where Microsoft’s Bing is preset as the default search engine. Yet, users overwhelmingly choose Google, a testament to the quality of its search results.

The court documents also revealed that Google’s search division, labeled as “Google Search+ Margins,” generated more than $146 billion in revenue in 2021. This is a significant increase from 2014, when the division made about $47 billion and paid roughly $7.1 billion for default status. The revenue for Google’s Search+ has more than tripled between 2014 and 2021, while its Traffic Acquisition Costs (TAC) have almost quadrupled.

The revelation of Google’s $26 billion expenditure raises critical questions about the future of online search and the tech industry at large. Will Google’s massive investment pay off, or will it backfire, opening doors for competitors to swoop in?

Google Can Transform ANC Earbuds Into Heart Rate Monitors With a Simple Software Update


According to recent research, Google has developed a way to transform active noise-canceling (ANC) earbuds into heart rate monitors with a simple software update. In this way, manufacturers could offer a discreet and cost-effective alternative to smartwatches and other wearable devices.

The technology is rooted in a method called audioplethysmography (APG). This approach uses sound waves to measure blood flow in the ear canal, allowing for the monitoring of pulse rate and heart rate variability. The ear canal is an ideal location for such measurements due to its proximity to a major artery. The deep ear artery forms an intricate network of smaller vessels that extensively permeate the auditory canal, making it ideal for accurate heart rate monitoring.

The ANC earphones come equipped with built-in microphones that play a crucial role in this technology. By sending low-intensity ultrasound signals through the earphone’s speakers, the technology triggers echoes. These echoes are then picked up by onboard feedback microphones. The tiny ear canal skin displacement and heartbeat vibrations modulate these ultrasound echoes, which are then processed into a heart rate reading as well as heart rate variability (HRV) measurement.
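As a toy illustration of the final step described above, turning a beat-modulated echo envelope into a pulse-rate number, a crude peak-picking sketch might look like this. The published APG pipeline involves ultrasound demodulation and far more robust signal processing; the signal below is synthetic and the thresholds are invented.

```python
import math

def estimate_bpm(envelope: list[float], fs: float) -> float:
    """Estimate pulse rate from an echo-envelope signal sampled at fs Hz.
    Crude peak picking: a sample counts as a beat if it is a local maximum
    above the signal mean and at least 0.33 s after the previous beat."""
    mean = sum(envelope) / len(envelope)
    refractory = int(0.33 * fs)  # rules out rates above ~180 bpm
    peaks, last = [], -refractory
    for i in range(1, len(envelope) - 1):
        if (envelope[i] > mean
                and envelope[i] >= envelope[i - 1]
                and envelope[i] >= envelope[i + 1]
                and i - last >= refractory):
            peaks.append(i)
            last = i
    if len(peaks) < 2:
        return 0.0
    intervals = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

# Synthetic envelope pulsing at 1.2 Hz (72 bpm), 10 s at 100 Hz:
fs = 100.0
signal = [math.sin(2 * math.pi * 1.2 * t / fs) for t in range(int(10 * fs))]
bpm = estimate_bpm(signal, fs)  # close to 72 for this clean synthetic signal
```

Heart rate variability would fall out of the same computation: instead of averaging the beat-to-beat intervals, one examines how much they vary.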

Google’s method has been rigorously tested and has shown an impressive accuracy rate of over 97%. The technology also exhibits high resilience to motion artifacts and remains unaffected by seal conditions and skin tones. This makes it a more reliable and inclusive health sensor compared to existing methods.

The researchers believe that this APG approach is superior to traditional methods that require integrating photoplethysmogram (PPG) and electrocardiogram (ECG) sensors, as well as a microcontroller, into the earbuds. Such integration would inevitably add cost, weight, power consumption, acoustic design complexity, and form-factor challenges, constituting a strong barrier to wide adoption.

While the technology has not yet been commercially implemented, it holds immense promise. If validated through peer reviews and certifications, Google could potentially include this technology in its products, revolutionizing the way we think about health tech. The technology is the result of collaboration across Google Health, product, UX, and legal teams, and its integration into products like Pixel Buds is far from guaranteed at this point.