IMPACT
..building a unique and dynamic generation.
Wednesday, November 20, 2024
Google’s Latest Search Update Suggests Business Owners Need a New Content Marketing Strategy
A new update to Google’s search algorithm has top SEO consultants in agreement: The rules of Google have changed, and the old playbooks need to be rewritten.
Earlier this week, Google released its latest “Core Update,” meaning the tech giant’s search algorithms and systems are being refreshed and adjusted. This is Google’s third core update of 2024, and while SEO experts say the impact of November’s update won’t be known for a few weeks, they expect it to follow a recent trend of punishing websites for producing spammy or AI-written content. Their advice to business owners? Quality over quantity.
SEO experts like David Riggs, founder of SEO firm Pneuma Media (No. 170 on the 2024 Inc. 5000), say that Google’s recent efforts are intended to “reduce the impact of gamification” on search results. “The SEO strategy of 2010 was to just throw a bunch of keywords in and it’ll rank,” he says. “Now, it’s very different.”
Riggs says that many of the tricks and techniques that SEO pros used to rely on, like filling up articles with backlinks, publishing short “quick hits,” and creating keyword-filled blog posts, are now being actively disincentivized by Google, as the company attempts to fight against AI-generated content intentionally designed to game the system. “Google caught on and changed the cheat codes,” adds Riggs, “and now you’ve got to change your strategy.”
David Kauzlaric, co-founder of SEO consultancy Agency Elevation (No. 461 on the 2024 Inc. 5000), says that the last two years have seen a flurry of core updates that have totally upended how SEO professionals approach their work. “These updates are helping Google’s users,” he says, “they’re not helping business owners who are trying to do SEO. It makes our job far worse and far harder.”
“If you don’t pivot to provide what Google wants,” Kauzlaric says, “you’re going to continue to see a decline in traffic, because Google is getting very particular.”
How can businesses ensure that their websites and content still rank highly in this new era of Google? Steven Wilson, director of SEO at Above The Bar Marketing (No. 614 on the 2024 Inc. 5000), says if you’re using AI to write full blog posts for your website, you need to stop now. “There is a war on AI,” says Wilson, who adds that his own research has found that “the more AI content you have, the less likely that you’ll show up in search.”
Instead of relying entirely on AI, Wilson recommends writing content in a conversational, more casual tone. “AI can’t do that conversational tone,” says Wilson, who also says business owners should be careful not to produce an overabundance of content just for the sake of getting ranked by Google. Wilson says you can still use AI to help write pieces and optimize headlines, but the majority of the writing should come from a human.
Wilson also recommends limiting the majority of your content to topics relevant to your business and that you are an expert in. Google’s algorithm highly values authors that appear to have authority on certain subjects, so sticking to “topic clusters” in your realm of expertise is an efficient way to build that authority.
Another new strategy that seems to be showing promise is deleting old SEO-focused content from your website. Parker Evensen, founder of digital marketing agency Honest Digital (No. 878 on the 2024 Inc. 5000), says that in previous years, “if you had a lot of authority, you could push out huge quantities of content, and that could help your website. But we’ve found that paring down a lot of that content, especially irrelevant content, can actually help your website.”
“I think what Google is trying to do is get people to stop fighting the algorithm and focus on creating the best, most high-quality content they can,” says Riggs. “They want something from a human perspective that’s creating good value and answering real questions. That’s the content that’s going to win.”
Monday, November 18, 2024
OpenAI, Competitors Look for Ways to Overcome Current Limitations
Artificial intelligence companies like OpenAI are seeking to overcome unexpected delays and challenges in the pursuit of ever-bigger large language models by developing training techniques that use more human-like ways for algorithms to “think.”
A dozen AI scientists, researchers, and investors told Reuters they believe these techniques, which are behind OpenAI’s recently released o1 model, could reshape the AI arms race, with implications for the resources AI companies have an insatiable demand for, from energy to chips.
OpenAI declined to comment for this story. After the release of the viral ChatGPT chatbot two years ago, technology companies, whose valuations have benefited greatly from the AI boom, have publicly maintained that “scaling up” current models through adding more data and computing power will consistently lead to improved AI models.
But now, some of the most prominent AI scientists are speaking out on the limitations of this “bigger is better” philosophy.
Ilya Sutskever, co-founder of AI labs Safe Superintelligence (SSI) and OpenAI, told Reuters recently that results from scaling up pre-training — the phase of training an AI model that uses a vast amount of unlabeled data to understand language patterns and structures — have plateaued.
Sutskever is widely credited as an early advocate of achieving massive leaps in generative AI advancement through the use of more data and computing power in pre-training, which eventually created ChatGPT. Sutskever left OpenAI earlier this year to found SSI.
Growth and stagnation
“The 2010s were the age of scaling, now we’re back in the age of wonder and discovery once again. Everyone is looking for the next thing,” Sutskever said. “Scaling the right thing matters more now than ever.”
Sutskever declined to share more details on how his team is addressing the issue, other than saying SSI is working on an alternative approach to scaling up pre-training.
Behind the scenes, researchers at major AI labs have been running into delays and disappointing outcomes in the race to release a large language model that outperforms OpenAI’s GPT-4 model, which is nearly two years old, according to three sources familiar with private matters.
The so-called ‘training runs’ for large models can cost tens of millions of dollars by running hundreds of chips simultaneously. They are more prone to hardware-induced failure given how complicated the system is, and researchers may not know the eventual performance of the models until the end of the run, which can take months.
Another problem is that large language models gobble up huge amounts of data, and AI models have exhausted all the easily accessible data in the world. Power shortages have also hindered the training runs, as the process requires vast amounts of energy.
To overcome these challenges, researchers are exploring “test-time compute,” a technique that enhances existing AI models during the so-called “inference” phase, or when the model is being used. For example, instead of immediately choosing a single answer, a model could generate and evaluate multiple possibilities in real time, ultimately choosing the best path forward.
This method allows models to dedicate more processing power to challenging tasks like math or coding problems or complex operations that demand human-like reasoning and decision-making.
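The generate-and-evaluate loop described above is often called best-of-n sampling, and it can be sketched in a few lines of Python. Everything here is a toy illustration, not OpenAI's actual implementation: `best_of_n`, the `guess` "model," and the `closeness` scorer are hypothetical stand-ins for a real sampling call and a real verifier.

```python
import itertools

def best_of_n(prompt, generate, score, n=8):
    """Sketch of test-time compute via best-of-n sampling: draw n
    candidate answers, score each one, and return the highest-scoring
    candidate instead of committing to the first sample."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

# Toy stand-ins (assumptions, not any real model API): a "model" that
# deterministically cycles through guesses 0-99, and a verifier that
# scores a guess by its closeness to the true answer, 42.
counter = itertools.count()
guess = lambda prompt: next(counter) % 100
closeness = lambda x: -abs(x - 42)

best = best_of_n("What is 6 * 7?", guess, closeness, n=100)
print(best)  # 42 -- with 100 samples, the exact answer is among the candidates
```

The trade-off is exactly the one the researchers describe: spending more compute at answer time (more samples, more scoring) rather than baking all capability into a bigger pre-trained model.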
“It turned out that having a bot think for just 20 seconds in a hand of poker got the same boosting performance as scaling up the model by 100,000x and training it for 100,000 times longer,” said Noam Brown, a researcher at OpenAI who worked on o1, at the TED AI conference in San Francisco last month.
OpenAI has embraced this technique in its newly released model known as “o1,” formerly known as Q* and Strawberry, which Reuters first reported in July. The o1 model can “think” through problems in a multi-step manner, similar to human reasoning. It also involves using data and feedback curated from PhDs and industry experts. The secret sauce of the o1 series is another set of training carried out on top of ‘base’ models like GPT-4, and the company says it plans to apply this technique with more and bigger base models.
Competition ramps up
At the same time, researchers at other top AI labs, including Anthropic, xAI, and Google DeepMind, have also been working to develop their own versions of the technique, according to five people familiar with the efforts.
“We see a lot of low-hanging fruit that we can go pluck to make these models better very quickly,” said Kevin Weil, chief product officer at OpenAI, at a tech conference in October. “By the time people do catch up, we’re going to try and be three more steps ahead.”
Google and xAI did not respond to requests for comment and Anthropic had no immediate comment.
The implications could alter the competitive landscape for AI hardware, thus far dominated by insatiable demand for Nvidia’s AI chips. Prominent venture capital investors, from Sequoia to Andreessen Horowitz, who have poured billions into funding the expensive development of AI models at multiple AI labs including OpenAI and xAI, are taking notice of the transition and weighing the impact on their expensive bets.
“This shift will move us from a world of massive pre-training clusters toward inference clouds, which are distributed, cloud-based servers for inference,” Sonya Huang, a partner at Sequoia Capital, told Reuters.
Demand for Nvidia’s AI chips, the most cutting-edge on the market, has fueled its rise to becoming the world’s most valuable company, surpassing Apple in October. Unlike the training-chip market, where Nvidia dominates, the inference market could expose the chip giant to more competition.
Asked about the possible impact on demand for its products, Nvidia pointed to recent company presentations on the importance of the technique behind the o1 model. Its CEO Jensen Huang has talked about increasing demand for using its chips for inference.
“We’ve now discovered a second scaling law, and this is the scaling law at a time of inference … All of these factors have led to the demand for Blackwell being incredibly high,” Huang said last month at a conference in India, referring to the company’s latest AI chip.
Sunday, November 17, 2024
Crypto’s Year of Capitulation Is a Joke
Thank God for the crypto visionaries, who heroically declared “We’re leaving banking and finance in the dust” and are now huddled in a panic room, clutching a single American flag. Listen closely and you’ll hear them whispering: “We simply cannot innovate without first being kissed on the forehead by the new president.”
To put it another way, crypto’s capitulation to trad markets and politics over the past year is an absolute joke.
When Bitcoin emerged in 2009, the concept was revolutionary. It promised a decentralized currency that operated without the oversight of banks or governments. In Satoshi Nakamoto’s groundbreaking whitepaper, Bitcoin was described as a “peer-to-peer electronic cash system.” This ambition was radical in its simplicity. Bitcoin offered a way to bypass intermediaries entirely. It would grant people the ability to control their financial interactions and assets.
From day one, Bitcoin was all about decentralization, sticking it to the banks and tearing down the financial establishment. Cut out the middlemen, they said. Liberate the masses, they said. It was a vision of freedom with a side of chaos.
Crypto’s Promise Versus Reality
Then came the gold rush. Bitcoin’s value exploded, altcoins multiplied like weeds, and DeFi platforms popped up. Each one claimed it was about to overthrow Wall Street any day now. True believers swore that crypto could make banks obsolete, that it was building a utopian financial playground where everyone—especially the people the banks ignored—could finally get ahead.
Since then, the same banks and corporations that once sneered at crypto as a scam are now jumping on the bandwagon, especially through shiny new Bitcoin exchange-traded funds. With the U.S. Securities and Exchange Commission’s blessing earlier this year, Wall Street can now offer “crypto exposure” without anyone having to get an actual coin. Such heavyweights as BlackRock and Fidelity wasted no time launching their own ETFs. Institutional money is flooding in.
Crypto firms that once swore to disrupt the system are bending over backward to join it. In the U.K., where the Financial Conduct Authority barely approves a fraction of crypto applications, companies are eagerly adopting know-your-customer and anti-money-laundering protocols. Just to get a foot in the door. The “movement” that should have been finance’s punk rock is now happily cozying up to traditional finance, trading rebellion for respectability.
From Crypto Visionaries to Sell-Outs
In 2024 alone, crypto firms and influencers have shelled out millions in political contributions, with Coinbase and their crew leading the charge, all to butter up the right people and lock down favorable regulations. Lobbying, schmoozing, and campaign donations. It’s a long way from decentralization and “power to the people.”
Companies are now openly aligning with politicians who wave the pro-crypto flag—such as former President Donald Trump, who’s been cheerleading for Bitcoin and the whole digital currency circus. The anti-establishment rebellion is another talking point for politicians who smell votes and dollar signs.
By hitching themselves to politicians and pushing agendas, crypto leaders risk turning the whole industry into just another lobby group clawing for a slice of influence in the swamp of power games. The more idealistic crowd—myself included—see this as a total betrayal of what crypto was supposed to stand for.
Crypto’s got itself a civil war, and it’s as messy as you’d expect. On one side, you’ve got the pragmatists, grumbling about how “mainstream adoption” might require a little soul-selling. Or a lot of soul-selling. Or a damned fire sale.
As the debate rages across Twitter threads, Warpcast, and Discord servers, the real question looms: Can crypto stay true to its anti-establishment roots—or did it already sell out the minute someone printed a whitepaper in Helvetica?
Maybe that’s just the natural life cycle of any “revolution.” Sooner or later, everything goes Hot Topic.
First, you’re the scrappy underdog, shaking your fist at the establishment, shouting about freedom and autonomy. Then you get a taste of the good life—private jets, Davos invites, a little pat on the head from your friendly neighborhood investment banker. Suddenly, you’re not so different from the suits you swore to dethrone. At some point, the righteous battle cry of “decentralize everything” turns into “well, maybe just a little centralization… for regulatory purposes.”
Too Late for a Revolution?
Do I still think crypto matters? In some ways, yes. I know, I know. It’s a lonely hill to die on.
But somewhere under all the jargon, lobbying dollars, and Wall Street handshakes, I still believe there’s a spark left, a shot at reclaiming crypto’s anarchic roots. A system that empowers the individual, shakes off the leeches, and actually challenges the entrenched power structures instead of just asking to sit with them.
If you dig deep enough, there’s still a chance to resurrect that original spark—to build something that truly stands outside the walls of power, rather than bending a knee to get inside them. Because if crypto’s going to mean anything at all, it has to remember what it set out to destroy. Before it becomes just another face in the crowd.
Otherwise, the “decentralized revolution” that spent a decade screaming about autonomy will keep showing up to the big leagues begging for a seat at the same rotten table it swore to flip.
EXPERT OPINION BY JOAN WESTENBERG, FOUNDER AND CEO, STUDIO SELF @JOANWESTENBERG
Wednesday, November 13, 2024
Forget the Nanny, Check the Chatbot. AI May Soon Help With Parenting
As AI technology advances, it’s natural that startups and big tech names want to profit off the revolution by finding ways to put it into more corners of everyday life. Current examples include applications that help you out at the office, assisting in fighting employee burnout, and in more intimate, subtle scenarios like health care. Now, according to Andreessen Horowitz partner Justine Moore, AI is set to help out with something very “human” indeed: the complex, stressful, heartfelt, wonderful job of being a parent.
In a post on X yesterday, reported by news site TechCrunch, Moore posited an interesting question: “What if parents could tap into 24/7 support that was much more personal and efficient?” The idea is simple, on its face—we’ve been busy loading up all these super-smart AI systems with megatons of real-world data, tapping into it for help making decisions like, “Which marketing campaign should our startup use?”
Within all that data is lots of very practical material, too, including advice that may help a stressed-out parent trying to solve a tricky moment with the kids. Unlike friends and family and even many sources of professional human help, an AI assistant is also always available … even when it’s 3 a.m. and your infant has just thrown up all over the nursery.
Moore went a step further, TechCrunch noted, highlighting what she called a new “wave of ‘parenting co-pilots’ built with LLMs and agents.” Moore touted the opportunity to develop dedicated family-focused AI tools with specialist knowledge and expertise—specific variants of the large language model (LLM) chatbot tech that we’re all getting used to. She suggested that the upcoming wave of AI agents, which are small AI-powered tools that can perform actions all by themselves in a digital environment, could help too. It’s easy to imagine the usefulness of an AI agent that almost instantly finds a deal on the brand of disposable diapers you like and then has them delivered when you need them.
But Moore also highlighted several startups with innovative tech to help with parenting, including Cradlewise, which uses AI connected to a baby monitor to help analyze a baby’s sleep pattern—and even rock the crib. There’s also the opportunity for this sort of AI system to be “always in your corner,” Moore said, ready to just listen to your emotional outbursts, even if they happen just after 3 a.m. while your partner is sleeping and you’re cleaning up baby vomit.
Moore’s words may evoke memories of the Eliza program among tech-savvy readers. It’s a bit of a deep cut, but this was developed way back in the mid-1960s and was one of the very first chatbots. Primitive as it seems now, Eliza paved the way for lots of much smarter tech that followed it, not least because some medical professionals thought it offered benefits to patients who chatted with it. A 21st-century, parenting-focused AI Eliza could play a role in helping new parents navigate all the challenges of rearing kids.
It’s certainly an idea that may be having its moment. In a post on self-described parenting platform Motherly in April, writer Sarah Boland described what she said was an “unpopular opinion,” and noted that she was using AI to help her parent, including for simple things like task planning. And, in May, popular site Lifehacker set out a list of ways AI can help you with parenting jobs.
But why should we care specifically about Moore’s social media musings?
Firstly, because of whom she works for. Venture capital firm Andreessen Horowitz is one of the biggest names in the business, and it’s recently been heralding a “new era” in venture funding with a $7.2 billion fund it’s drawn together. If a partner at a firm like this, which has already shown its positive thinking about AI technology, takes time to highlight a whole new area that a buzzy tech may be set to exploit, it’s worth paying attention.
The parenting business is already lucrative—analysis site Statista pegs the global parenting mobile app market alone as likely to grow to $900 million by 2030. Though it may seem a “soft” market that’s more about human feelings than high tech, technology has been becoming part of child-raising for years. If your AI startup is looking for unexpected ways to leverage your innovation, perhaps it’s time to consider how you could help raise the next generation of kids. They’ll be the first to be born into a world where AI is normal.
Just be thoughtful and perhaps a little wary. AI tech is not without some risks, especially when it comes to younger or more vulnerable users.
BY KIT EATON @KITEATON