
Exploring AI and an Edge Computing Reality Check with OnLogic

The edge computing landscape stands at an intersection of practical necessity and AI transformation. My recent Fireside Chat with Hunter Golden, senior product manager at OnLogic, revealed just how far the reality of what's needed diverges from the hype. As organizations grapple with deploying AI at the edge, Hunter explained how right-sizing edge investments delivers the best return.

During our discussion, Hunter explained that OnLogic has more than two decades of experience in industrial and edge computing, dating to long before AI became the driving force. OnLogic's computers have quietly run applications behind the scenes of our daily lives, from amusement park kiosks to flight information screens to robots working in warehouses and even harvesting crops. But the onset of AI and the expansion of automation opportunities have fundamentally shifted compute density requirements at the edge, even as the physical footprint remains largely static.

Hunter emphasized a critical misconception plaguing enterprises looking to deploy AI at the edge: the belief that AI requires massive cloud infrastructure or discrete GPUs. As he explained, “both training and inference can easily occur at the edge” with lower-than-expected compute requirements, noting that even his “not very powerful laptop” could run DeepSeek.
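
To make that point concrete, here is a minimal sketch of CPU-only local inference using llama-cpp-python. The model file name is an illustrative assumption (any small quantized GGUF model, such as a distilled DeepSeek variant downloaded ahead of time, would stand in), not something discussed in the chat:

```python
# Minimal sketch of local, CPU-only inference with llama-cpp-python
# (pip install llama-cpp-python). The model path below is a placeholder
# assumption: point it at any small quantized GGUF model you have locally.
from llama_cpp import Llama

llm = Llama(
    model_path="models/deepseek-r1-distill-qwen-1.5b-q4.gguf",  # hypothetical path
    n_ctx=2048,      # modest context window keeps memory needs low
    n_threads=4,     # a handful of CPU threads; no discrete GPU required
)

out = llm("Summarize why edge inference reduces latency:", max_tokens=128)
print(out["choices"][0]["text"])
```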

We explored the balance between performance, power, and cost that defines successful edge AI deployment, and how hardware selection and the workload objective are completely intertwined. For example, for computer vision, sizing up the workload means understanding the number of video streams, resolution requirements, model size, and target frame rates. With those factors in hand, organizations can spec appropriate hardware rather than defaulting to expensive, overpowered solutions.
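
That sizing exercise can start as simple arithmetic. The sketch below is a back-of-the-envelope example; the stream count, frame rate, and per-inference latency are illustrative assumptions, not figures from the conversation or OnLogic recommendations:

```python
import math

# Rough edge-vision sizing sketch. Every number here is an illustrative
# assumption; measure real model latency on candidate hardware yourself.
streams = 8            # camera feeds to analyze
target_fps = 15        # frames per second you actually need per stream
ms_per_inference = 12  # assumed per-frame model latency on the candidate hardware

required_per_sec = streams * target_fps        # total inferences per second
capacity_per_sec = 1000 / ms_per_inference     # what one accelerator sustains

print(f"Workload: {required_per_sec} inferences/s")
print(f"One accelerator: ~{capacity_per_sec:.0f} inferences/s")
print(f"Accelerators needed: {math.ceil(required_per_sec / capacity_per_sec)}")
```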

The conversation also highlighted three key advantages of edge AI deployments that can get overlooked in cloud-focused discussions:

Achieving lower latency, with benefits that are immediate and measurable in edge deployments

Maintaining data sovereignty, which matters in medical applications and other use cases where it's critical for organizations to retain control of their own data

Bypassing network reliability concerns, with edge deployments allowing applications to continue to function even if a network goes down
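
One way to picture that last point is a store-and-forward loop: results land in local storage first and are uploaded only when the link returns. This is a minimal sketch of the pattern, with a hypothetical upload() stub standing in for whatever backend a given deployment actually uses:

```python
import json
import sqlite3
import time

# Store-and-forward sketch: inference results land in local SQLite first,
# so the application keeps working when the uplink is down. upload() is a
# hypothetical placeholder for the real cloud call.
db = sqlite3.connect("edge_results.db")
db.execute("CREATE TABLE IF NOT EXISTS results (id INTEGER PRIMARY KEY, payload TEXT)")

def record(result: dict) -> None:
    """Always succeeds locally, regardless of network state."""
    db.execute("INSERT INTO results (payload) VALUES (?)", (json.dumps(result),))
    db.commit()

def upload(payload: str) -> bool:
    """Placeholder for the real backend call; returns False while offline."""
    return False

def drain() -> None:
    """Forward anything queued once connectivity returns."""
    for row_id, payload in db.execute("SELECT id, payload FROM results").fetchall():
        if upload(payload):
            db.execute("DELETE FROM results WHERE id = ?", (row_id,))
            db.commit()

record({"camera": 3, "event": "pallet_detected", "ts": time.time()})
drain()  # safe to call on a timer; nothing is lost while offline
```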

Hunter’s insights into IT modernization revealed a sector dealing with diverse transformation paths. Some companies are just connecting programmable logic controller (PLC) data to operational technology networks, while others are deploying autonomous mobile robots for material handling. The key is understanding both short-term objectives and long-term roadmaps so you can spec the right hardware and avoid having to rip and replace later on.
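
For the simpler end of that spectrum, pulling a few holding registers off a PLC over Modbus TCP takes very little code. The sketch below uses only the Python standard library; the PLC address, unit ID, register offset, and count are placeholder assumptions for illustration:

```python
import socket
import struct

# Minimal Modbus TCP "read holding registers" request built by hand.
# Host, port, unit ID, start register, and count are placeholder values.
PLC_HOST, PLC_PORT = "192.168.1.50", 502
UNIT_ID, START_REGISTER, COUNT = 1, 0, 4

# MBAP header + function 0x03 (read holding registers) request PDU
request = struct.pack(
    ">HHHBBHH",
    1,               # transaction ID
    0,               # protocol ID (always 0 for Modbus)
    6,               # bytes remaining after the length field
    UNIT_ID,
    3,               # function code: read holding registers
    START_REGISTER,
    COUNT,
)

with socket.create_connection((PLC_HOST, PLC_PORT), timeout=3) as sock:
    sock.sendall(request)
    response = sock.recv(256)

# Skip the 7-byte MBAP header, function code, and byte count, then unpack
# COUNT big-endian 16-bit register values.
registers = struct.unpack(f">{COUNT}H", response[9:9 + 2 * COUNT])
print("Holding registers:", registers)
```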

Looking toward future infrastructure needs, Hunter underlined the importance of guaranteed lifecycles and scalable architectures. OnLogic’s commitment to five-year lifecycles from launch addresses a common pain point: prototype hardware that becomes unavailable by deployment time. That lifecycle transparency on multi-year customer projects helps enterprises know they’ll still have the right hardware when they reach deployment.

What’s the TechArena take? As organizations like OnLogic continue to balance innovation with practical constraints, we’re witnessing the emergence of edge AI that prioritizes efficiency, reliability, and cost-effectiveness without sacrificing the transformative potential of AI solutions. The real breakthrough is in the thoughtful matching of workload requirements to appropriate infrastructure, supported by partners who understand both the technical challenges and the business realities of edge deployment.

Listen to the full Fireside Chat for more from our conversation. Connect with Hunter Golden on LinkedIn and explore OnLogic’s Ultimate Edge Server Selection Checklist here.
