Stay Ahead, Stay ONMINE

Five takeaways from Cisco Live EMEA

4. Europe is on the AI clock Each wave of technology creates an opportunity for the different regions of the globe to establish themselves as a leader. The battle for AI supremacy is well underway with the US having a strong foothold. The Middle East is likely to have an impact, as many countries there […]

4. Europe is on the AI clock

Each wave of technology creates an opportunity for the different regions of the globe to establish themselves as a leader. The battle for AI supremacy is well underway with the US having a strong foothold. The Middle East is likely to have an impact, as many countries there have made commitments to investing in this area.

During a press Q&A, Oliver Tuszik, president of EMEA for Cisco, talked about the opportunity for Europe and the need to act as one. He commented that often, the larger EU countries, such as Germany and France, tend to act in their own best interests. But that may not have the best long-term outcome. While they carry economic heft within the EU, each individual country is small when compared to the likes of the US, India, the Middle East region and others. But the EU as a whole carries significant weight.

Tuszik expressed some urgency for the EU to act as one and is optimistic that country leaders are aligned on this. He pointed to the EU AI Act as a proof point that the region understands what’s at stake. He added that the structure of the AI Act being outcome based versus overly regulated is another indicator of change within the EU. Time will tell if Tuszik’s optimism is warranted, but we shouldn’t have to wait long to tell.

5. Silicon One remains Cisco’s best kept secret

Cisco is one of the few network vendors that manufactures its own silicon. The tech industry tends to gravitate to certain trend that become absolutes, such as “everything is moving to the cloud.” For years, in networking, doing more in software was all the rage. But it’s always a combination of things that create a great experience. In networking, one can do a lot in software, but there are certain tasks, such as managing traffic, deep packet inspection and buffering, that are best done in silicon.

A good way to understand the importance of custom silicon is to look no further than the AI space. Nvidia initially made a GPU because general purpose processors could not handle high performance graphics. Similarly, Cisco makes Silicon One because the network has unique requirements that don’t perform well with off-the-shelf chips. Initially, Silicon One was used for Cisco to regain a foothold with the hyperscalers, but the company has done an excellent job of bringing the benefits of Silicon One to the rest of its product line, including the above-mentioned N9300.

Given the price/performance and feature consistency benefit Cisco gets with Silicon One, I’m surprised at the lack of marketing and related awareness of it. Over the last year or so, I’ve seen Martin Lund, executive vice president of the common hardware group at Cisco, get in front of customers, press and analysts more often. But articulating the benefits is something Cisco should double down on rather than it being a “best kept secret.”

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Lenovo renews ThinkSystem lineup for AI workloads and more

SR650a V4: a 2U2S design specifically meant for high-end use, such as AI and other GPU-intensive workloads like machine learning, virtual desktop infrastructure (VDI), and media analytics. The platform supports up to four double-wide GPUs with front GPU access. Of the three servers, this is the one most geared toward

Read More »

Five takeaways from Cisco Live EMEA

4. Europe is on the AI clock Each wave of technology creates an opportunity for the different regions of the globe to establish themselves as a leader. The battle for AI supremacy is well underway with the US having a strong foothold. The Middle East is likely to have an

Read More »

Red Hat OpenShift 4.18 expands cloud-native networking

UDN improves the flexibility and segmentation capability of the default layer 3 Kubernetes pod network for VM administrators by enabling custom, isolated-by-default layer 2, layer 3, and localnet network segments, Lim explained. The segment can act as either primary or secondary networks for container pods and VMs.  Lim noted that

Read More »

AI, automation spur efforts to upskill network pros

SASE, ZTNA shape security skills As networking and security technologies converge, advanced security skills are critical to address cybersecurity challenges within network infrastructures and organizations are requiring networking professionals to have a deeper understanding of security concepts and be able to take on security-focused roles. “There are organizations that are

Read More »

BP boss: ‘We went too far too fast’ as firm slashes low carbon investment by $5bn

BP boss Murray Auchincloss admitted the firm went “too far, too fast” in its efforts to cut its oil and gas business and shift instead to renewable energy, as he unveiled plans to slash investment in its low carbon efforts by $5 billion (£4bn). Unveiling a long-awaited “reset” of the business, he further admitted “our optimism for a fast transition was misplaced”. Auchincloss ran the finances of the energy giant’s upstream oil and gas business when then-chief executive Bernard Looney unveiled ambitious plans to cut production of hydrocarbons by 40% and ramp up spending on wind farms, solar, hydrogen, and other areas of clean energy in 2020. © Erikka Askeland/DCT MediaBP CEO Murray Auchinlcoss presents ‘reset’ to investors at its capital markets day event. The firm has been under pressure to change tack as both its profits and share price have lagged behind competitors, including Shell. On one side activist fund Elliott Investment Management led the vanguard of shareholder demand for BP to cut costs on expensive low-carbon investment and focus on more profitable oil and gas business instead. But there are also many shareholders, including Scottish Widows, Hargreaves Lansdown and Royal London Asset Management arm, who want to see “progress in aligning capital expenditure with credible low-carbon scenarios”. BP announced $1.5bn hike in hydrocarbon spending Auchincloss said the firm planned to up investment in oil and gas from about $8.5bn a year to $10bn. Hitting back at concerns BP has not been moving fast enough he said: “Since I took over the CEO role at the start of 2024 we have taken deliberate action to significantly refocus the portfolio. A scale and pace of action over the past year is greater than anything I have seen over the past 20 years. This includes in low carbon.” The company also

Read More »

Galp Offers Mopane-3X Update

In a statement posted on its website on Tuesday, Galp said it has successfully drilled, cored, and logged the Mopane-3X well in Petroleum Exploration License 83 (PEL 83), offshore Namibia. The well was spudded on January 2, according to the statement, which highlighted that Mopane-3X is situated 18km away from the Mopane-1X well. Galp noted in the statement that Mopane-3X “targeted two stacked prospects, AVO-10 & AVO-13, and a deeper sand, in the southeast region of the Mopane complex, at c.1,200 m water depth”. “Preliminary data confirm light oil and gas-condensate significant columns across AVO-10, and light oil columns on AVO-13 and on the deeper sand, in high-quality sandstones,” Galp said in the statement. “The reservoirs log measures confirm good porosities, high pressures and high permeabilities. Initial fluid samples show low oil viscosity and minimum CO2 and H2S concentrations. Samples were sent for lab testing,” it added. “Mopane-3X higher than estimated pressures and preliminary results unlock further exploration and appraisal opportunities in the southeast region of Mopane,” it continued. In the statement, Galp said all acquired data will be integrated into the reservoir model and support the planning of potential further activities. Rigzone asked Galp if it had any production estimates for Mopane-3X. The company declined to comment. A BofA Global Research report sent to Rigzone on Wednesday said, “last week Galp confirmed it was working on maturing a first development concept for its Namibian discoveries and its latest exploration result now unlocks a second distinct potential production hub”. “We see the now tangible prospect of a multi-development runway as likely to garner greater industry interest when Galp re-engages in its planned farm-down process,” it added. The BofA Global Research report noted that “Galp’s Mopane discoveries in Namibia matter”, adding that the company’s “80 percent equity stake in a 150,000 barrel

Read More »

What if Bruce Lee had set federal transmission policy?

Vincent Duane is principal at Copper Monarch and Caitlin Shields is a partner at the Denver office of Wilkinson Barker Knauer. Bruce Lee once said, “[i]f you spend too much time thinking about a thing, you’ll never get it done.” While the Dragon may not have had the power grid in mind, his words capture the state of electric transmission policy in the United States today. The way we build out the grid has become too difficult because we are asking regulatory frameworks and planning processes to advance controversial agendas that go well beyond our immediate infrastructure needs. Not since rural electrification under the New Deal has this nation had such an historic opportunity to promote American jobs and economic growth through infrastructure development. Unlike the New Deal era, private capital markets, utilities and large electricity users across the country stand ready and willing — today — to invest in the energy infrastructure our country needs to secure America’s global competitiveness. So, what’s the hold up? RMI, echoed in a recent complaint by the Industrial Energy Consumers of America at the Federal Energy Regulatory Commission, claims existing transmission planning processes are selecting the wrong transmission projects, with inadequate focus on larger, regional transmission lines. Even if examples can be found to support this charge, the explanation offered by these organizations doesn’t add up. America’s transmission-owning utilities know how to build needed electric infrastructure. Indeed, that’s their job; it’s how they get paid. And it’s this truth that makes it impossible to accept charges that building regional transmission is somehow not in a utility’s self-interest. It doesn’t make sense for a utility with a business model predicated on making large capital investments, and a legacy in developing a continent-spanning high voltage network, to just stop making these kinds of investments. Instead

Read More »

Sempra announces $56B capital plan amid rapid Texas growth

Dive Brief: Sempra Energy executives unveiled a $56 billion five-year capital plan and their intent to open an early rate case in Texas during the company’s quarterly earnings call on Tuesday. The record capital plan is 16% larger than the company’s prior five-year plan, Sempra chairman and CEO Jeff Martin said. Increased spending, high interest rates, reduced revenues and a series of unfavorable regulatory outcomes in December eroded the company’s fourth-quarter earnings and 2025 projected earnings, triggering a 19% drop in the company’s stock price as analysts questioned the company’s financial plans. Martin acknowledged that the current course would curtail the company’s short-term financial performance, but said the plan was intended to position the company for greater-long term growth, particularly in Texas. Dive Insight: After a rough fourth quarter, Sempra is banking on a “Texas miracle” to restore the company’s financial performance. The company reported 2024 fourth quarter earnings of $665 million, down from $737 million in 2023, and also scaled back its expected 2025 earnings estimates during Tuesday’s call. An unfavorable decision in San Diego Gas & Electric’s latest rate case at the California Public Utilities Commission, issued in December, and a Dec. 31 Federal Energy Regulatory Commission order rejecting a return on equity adder for California Independent System Operator utilities have unexpectedly curtailed the company’s earning potential, Martin said. Delays in some of Sempra Infrastructure’s liquefied natural gas projects, higher interest rates and operating costs, and decreased electrical consumption due to a mild winter also cut into the company’s fourth-quarter earnings, according to Karen Sedgwick, executive vice president and chief financial officer for Sempra. Plans to open an early rate case for subsidiary Oncor in Texas to finance the company’s $56 billion five-year capital plan will also put downward pressure on the company’s earnings potential for the coming year, Martin

Read More »

PSE&G large load pipeline jumps to 4.7 GW as nuclear offtake talks continue: CEO LaRossa

Dive Brief: Initial and advanced interconnection requests from data centers and other large loads jumped to 4.7 GW from 400 MW a year ago at Public Service Electric & Gas, according to Ralph LaRossa, chair, president and CEO of Public Service Enterprise Group, the utility’s parent company. The projects are on average about 100 MW, which can often fit within PSE&G’s “robust” transmission system, LaRossa said Tuesday during a quarterly earnings conference call. PSEG is also in talks with potential data center customers that are interested in buying electricity directly from the Hope Creek and Salem nuclear power plants in southern New Jersey, according to LaRossa. PSEG Power owns 2,483 MW in the power plants.  Dive Insight: PSEG is “constructively positioned” going into 2025, according to Guggenhoim Securities analysts, led by Shahriar Pourreza. “Among utility peers, we believe [PSEG] offers potential earnings upside from data center commercial agreements, higher PJM regional load growth driving transmission investments, a constructive regulatory environment and no need for equity financing during a time of major equity issuance for the utility sector, driven by incremental growth and balance sheet repair,” the analysts said in a note Tuesday. However, the potential large load growth comes amid uncertainty surrounding the future of the PJM Interconnection’s capacity market. “I don’t know if there is a PJM market anymore,” LaRossa said, noting that some states in the grid operator’s footprint are exploring alternative approaches to ensuring they have adequate power supplies. “My concern there is mostly from a reliability standpoint,” LaRossa said. “Are we going to be able in this construct, to attract generation to the PJM region as a whole, and if so, is it going to be in a timely enough fashion?” New Jersey is at a crossroads, according to LaRossa. “We’re all trying to figure out the best

Read More »

Green Marine UK makes seven-figure investment as it eyes offshore wind

Orkney-based Green Marine UK will invest a seven-figure sum in a new subsea services department as it looks to secure a slice of the £270-million UK offshore wind opportunity. The new division will offer various new services, including general visual inspection (GVI), 3D Survey incorporating real-time simultaneous localisation and mapping (SLAM) analysis, marine site characterisation and O&M monitoring with a focus on subsea cables, pipelines & offshore structures. Green Marine UK’s expansion plans include buying subsea technology from companies such as Aberdeen-based company Rovtech. This will see the company purchase Rovtech’s VALOR remotely operated vehicle (ROV), which it recently acquired from Seatronics. In addition, Green Marine will buy technology and equipment from Aberdeen’s Tritech, along with Sonardyne and Digital Edge Subsea from the UK, and international companies Norbit, Voyis and EIVA The firm expects that the department will create three or four full-time jobs at its Stromness office, though this could increase as the department and equipment utilisation grow. Managing director Jason Schofield said: “Green Marine has built a strong track record over many years with particular success in the offshore wind sector. The unique skills and experience we’ve developed during this period have put us in prime position to diversify in line with growing industry demand. “While this entails an initial seven-figure capital investment, the longer-term company strategy is to continue investing and expanding way into the future.” Schofield added that his business will “benefit from a strategic location in Orkney” as it has the second-largest offshore wind installed wind capacity on its doorstep. The company boss said that the move marks a “significant growth opportunity for Green Marine UK and a vehicle to drive jobs and business expansion for many years to come”. Green Marine estimated that the ‘service addressable market’ for subsea O&M services across UK offshore

Read More »

Packaged offerings promise to make AI infrastructure deployment easier

“One of the challenges for AI — for any brand new technology — is putting the right combination of infrastructure together to make the technology work,” says Zeus Kerravala, founder and principal analyst at ZK Research. “If one of those components isn’t on par with the other two, you’re going to be wasting your money.” Time is taking care of the first problem. More and more enterprises are moving from pilot projects to production, and getting a better idea of how much AI capacity they actually need. And vendors are stepping up to handle the second problem, with packaged AI offerings that integrate servers, storage and networking into one convenient package, ready to deploy on-prem or in a colocation facility. All the major vendors, including Cisco, HPE, and Dell are getting in on the action, and Nvidia is rapidly striking deals to get its AI-capable GPUs into as many of these deployments as possible. For example, Cisco and Nvidia just expanded their partnership to bolster AI in the data center.   The vendors said Nvidia will couple Cisco Silicon One technology with Nvidia SuperNICs as part of its Spectrum-X Ethernet networking platform, and Cisco will build systems that combine Nvidia Spectrum silicon with Cisco OS software. That offering is only the latest in a long string of announcements by the two companies. For example, Cisco unveiled its AI Pods in October, which leverage Nvidia GPUs in servers purpose-built for large-scale AI training, as well as the networking and storage required.

Read More »

Cisco, Nvidia expand AI partnership to include Silicon One technology

In addition, Cisco and Nvidia will invest in cross-portfolio technology to tackle common challenges like congestion management and load balancing, ensuring that enterprises can accelerate their AI deployments, Patel stated. The vendors said they would also collaborate to create and validate Nvidia Cloud Partner (NCP) and Enterprise Reference Architectures based on Nvidia Spectrum-X with Cisco Silicon One, Hyperfabric, Nexus, UCS Compute, Optics, and other Cisco technologies. History of Cisco, Nvidia collaborations The announcement is just the latest expansion of the Cisco/Nvidia partnership. The companies have already worked together to make Nvidia’s Tensor Core GPUs available in Cisco’s Unified Computing System (UCS) rack and blade servers, including Cisco UCS X-Series and UCS X-Series Direct, to support AI and data-intensive workloads in the data center and at the edge. The integrated package includes Nvidia AI Enterprise software, which features pretrained models and development tools for production-ready AI. Earlier this month, Cisco said it has shipped the UCS C845A M8 Rack Server for enterprise data center environments. The 8U rack server is built on Nvidia’s HGX platform and designed to deliver the accelerated compute capabilities needed for AI workloads such as LLM training, model fine-tuning, large model inferencing, and retrieval-augmented generation (RAG). The companies are also collaborating on AI Pods, which are preconfigured, validated, and optimized infrastructure packages that customers can plug into their data center or edge environments as needed. The Pods are based on Cisco Validated Design principals, which provide a blueprint for building reliable, scalable, and secure network infrastructures, according to Cisco. The Pods include Nvidia AI Enterprise, which features pretrained models and development tools for production-ready AI, and are managed through Cisco Intersight.

Read More »

3 strategies for carbon-free data centers

Because of the strain that data centers (as well as other electrification sources, such as electric vehicles) are putting on the grid, “the data center industry needs to develop new power supply strategies to support growth plans,” Dietrich said. Here are the underling factors that play into the three strategies outlined by Uptime. Scale creates new opportunities: It’s not just that more data centers are being built, but the data centers under construction are fundamentally different in terms of sheer magnitude. For example, a typical enterprise data center might require between 10 and 25 megawatts of power. Today, the hyperscalers are building data centers in the 250-megawatt range and a large data center campus could require 1,000 megawatts of power. Data centers not only require a reliable source of power, they also require backup power in the form of generators. Dietrich pointed out that if a data center operator builds out enough backup capacity to support 250 megawatts of demand, they’re essentially building a new, on-site power plant. On the one hand, that new power plant requires permitting, it’s costly, and it requires highly training staffers to operate. On the other hand, it provides an opportunity. Instead of letting this asset sit around unused except in an emergency, organizations can leverage these power plants to generate energy that can be sold back to the grid. Dietrich described this arrangement as a win-win that enables the data center to generate revenue, and it helps the utility to gain a new source of power. Realistic expectations: Alternative energy sources like wind and solar, which are dependent on environmental factors, can’t technically or economically supply 100% of data center power, but they can provide a significant percentage of it. Organizations need to temper their expectations, Dietrich said.

Read More »

Questions arise about reasons why Microsoft has cancelled data center lease plans

This, the company said, “allows us to invest and allocate resources to growth areas for our future. Our plans to spend over $80 billion on infrastructure this fiscal year remains on track as we continue to grow at a record pace to meet customer demand.” When asked for his reaction to the findings, John Annand, infrastructure and operations research practice lead at Info-Tech Research Group, pointed to a blog released last month by Microsoft president Brad Smith, and said he thinks the company “is hedging its bets. It reaffirms the $80 billion AI investment guidance in 2025, $40 billion in the US. Why lease when you can build/buy your own?” Over the past four years, he said, Microsoft “has been leasing more data centers than owning. Perhaps they are using the fact that the lessors are behind schedule on providing facilities or the power upgrades required to bring that ratio back into balance. The limiting factor for data centers has always been the availability of power, and this has only become more true with power-hungry AI workloads.” The company, said Annand, “has made very public statements about owning nuclear power plants to help address this demand. If third-party data center operators are finding it tough to provide Microsoft with the power they need, it would make sense that Microsoft vertically integrate its supply chain; so, cancel leases or statements of qualification in favor of investing in the building of their own capacity.” However, Gartner analyst Tony Harvey said of the report, “so much of this is still speculation.” Microsoft, he added, “has not stated as yet that they are reducing their capex spend, and there are reports that Microsoft have strongly refuted that they are making changes to their data center strategy.” The company, he said, “like any other hyperscaler,

Read More »

Quantum Computing Advancements Leap Forward In Evolving Data Center and AI Landscape

Overcoming the Barriers to Quantum Adoption Despite the promise of quantum computing, widespread deployment faces multiple hurdles: High Capital Costs: Quantum computing infrastructure requires substantial investment, with uncertain return-on-investment models. The partnership will explore cost-sharing strategies to mitigate risk. Undefined Revenue Models: Business frameworks for quantum services, including pricing structures and access models, remain in development. Hardware Limitations: Current quantum processors still struggle with error rates and scalability, requiring advancements in error correction and hybrid computing approaches. Software Maturity: Effective algorithms for leveraging quantum computing’s advantages remain an active area of research, particularly in real-world AI and optimization problems. SoftBank’s strategy includes leveraging its extensive telecom infrastructure and AI expertise to create real-world testing environments for quantum applications. By integrating quantum into existing data center operations, SoftBank aims to position itself at the forefront of the quantum-AI revolution. A Broader Play in Advanced Computing SoftBank’s quantum initiative follows a series of high-profile moves into the next generation of computing infrastructure. The company has been investing heavily in AI data centers, aligning with its “Beyond Carrier” strategy that expands its focus beyond telecommunications. Recent efforts include the development of large-scale AI models tailored to Japan and the enhancement of radio access networks (AI-RAN) through AI-driven optimizations. Internationally, SoftBank has explored data center expansion opportunities beyond Japan, as part of its efforts to support AI, cloud computing, and now quantum applications. The company’s long-term vision suggests that quantum data centers could eventually play a role in supporting AI-driven workloads at scale, offering performance benefits that classical supercomputers cannot achieve. The Road Ahead SoftBank and Quantinuum’s collaboration signals growing momentum for quantum computing in enterprise settings. While quantum remains a long-term bet, integrating QPUs into data center infrastructure represents a forward-looking approach that could redefine high-performance computing in the years to come. With

Read More »

STACK Infrastructure Pushes Aggressive Data Center Expansion and Sustainability Strategy Into 2025

Global data center developer and operator STACK Infrastructure is providing a growing range of digital infrastructure solutions for hyperscalers, cloud service providers, and enterprise clients. Like almost all of the cutting-edge developers in the industry, Stack is maintaining the focus on scalability, reliability, and sustainability while delivering a full range of solutions, including build-to-suit, colocation, and powered shell facilities, with continued development in key global markets. Headquartered in the United States, the company has expanded its presence across North America, Europe, and Asia-Pacific, catering to the increasing demand for high-performance computing, artificial intelligence (AI), and cloud-based workloads. The company is known for its commitment to sustainable growth, leveraging green financing initiatives, energy-efficient designs, and renewable power sources to minimize its environmental impact. Through rapid expansion in technology hubs like Silicon Valley, Northern Virginia, Malaysia, and Loudoun County, the company continues to develop industry benchmarks for innovation and infrastructure resilience. With a customer-centric approach and a robust development pipeline, STACK Infrastructure is shaping the future of digital connectivity and data management in an era of accelerating digital transformation. Significant Developments Across 23 Major Data Center Markets Early in 2024, Stack broke ground on the expansion of their existing 100 MW campus in San Jose, servicing the power constrained Silicon Valley. Stack worked with the city of San Jose to add a 60 MW expansion to their SVY01 data center. While possibly the highest profile of Stack’s developments, due to its location, at that point in time the company had announced significant developments across 23 major data center markets, including:       Stack’s 48 MW Santa Clara data center, featuring immediately available shell space powered by an onsite substation with rare, contracted capacity. Stack’s 56 MW Toronto campus, spanning 19 acres, includes an existing 8 MW data center and 48 MW expansion capacity,

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple would between them devote $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences their own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% percent of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »