NGINX CONF 2018: Solarflare, a pioneer in the development of neural-class networks, today announced networking solutions for cloud service providers which are designed to eliminate the performance penalty of operating system overhead. The essence of the solution is a microkernel architecture that keeps the busy kernel as small as possible by onloading networking services to the lightning-fast user space of main memory, without modification to applications. This week at NGINX Conf 2018, Solarflare is demonstrating how NGINX Plus, equipped with Solarflare's Onload kernel bypass software running in user space and XtremeScale NICs, supports four times more user requests for web content.
The Solarflare solutions for microkernel architectures allow Internet Service Providers (ISPs) to transform their software load balancers into revenue-producing infrastructures. ISPs support thousands of high-traffic websites, each serving up to millions of concurrent requests from users. With Onload user space networking, IT organizations can now deploy more efficient software load balancers, each supporting far more requests, and use the savings to invest in revenue-producing app and web servers.
"We're pleased to work with Solarflare to help customers meet the challenges of operating their applications at high scale," said Paul Oh, Head of Business Development at NGINX. "Solarflare kernel bypass allows NGINX users to take advantage of user space acceleration with fewer requirements to scale or re-architect their applications."
"The same Solarflare user space networking that enables billions of instant stock trades helps NGINX service billions of user requests for web content and applications," said Ahmet Houssein, Vice President of Marketing and Strategic Development at Solarflare. "NGINX users can now deploy software load balancers to do more work on the same number of less expensive systems."
Solarflare will be demonstrating the business benefits of Onload kernel bypass and recruiting members to its User Space Force at NGINX Conf 2018 at the Loews Atlanta. Solarflare will also deliver a breakout session titled "Turbocharge Your NGINX Deployment" today at 4:05 p.m.
The Solarflare Networking Solution for Microkernel Architectures
The keys to unlocking productivity from software load balancers are acceleration solutions consisting of Solarflare Onload kernel bypass software running in main memory user space, and XtremeScale 10/25/40/50/100Gb Ethernet NICs for Linux. The solution, which eliminates the overhead penalty of operating systems, is proven in the electronic trading industry where nine of ten stock exchanges use Solarflare kernel bypass technology and NICs.
Onload provides an industry-standard, POSIX-compliant Ethernet TCP/IP socket interface to applications, avoiding the need for application modification that has historically been necessary with high-performance networking stacks.
By operating in user space and bypassing the kernel, Onload increases the packet rate of Solarflare NICs, freeing up CPU cycles for additional user requests.
Solarflare is pioneering server connectivity for neural-class networks. From silicon to firmware to software, Solarflare provides a comprehensive, integrated set of technologies for distributed, ultra-scale, software-defined datacenters.
The Solarflare XtremeScale™ Architecture is a design framework that includes a comprehensive suite of features for ultra-scale environments: high-bandwidth, ultra-low-latency, ultra-scale connectivity; software-defined; secure with hardware firewalls; and instrumented for line-speed telemetry.
Solarflare solutions have earned a sterling reputation in financial services and are used by virtually every major global exchange, commercial bank and hedge fund. This exacting, regulated performance uniquely qualifies our solutions for use in ultra-scale applications in IoT, big data and artificial intelligence where low latency, robust security and insightful telemetry are critical.
Lisa Briggs, 949.255.2521