[Photo caption: Curling, like the cURL project, requires precision and is underappreciated.]

When you first start working with the command line, it can feel like there’s an impermeable wall between the local space you’re typing in and the greater Internet. On your side, you’ve got your commands and files; beyond the wall are servers, images, APIs, webpages, and more bits of useful, ever-changing data. One of the most popular ways through that wall has been cURL, or “client URL,” which turns 25 this month.
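For instance, a single curl invocation is enough to pull a resource through that wall. A minimal sketch, using example.com as a stand-in for any URL you actually care about:

```shell
# Fetch a URL and print the response body to stdout
curl https://example.com/

# Follow redirects (-L) and save the body to a file (-o)
curl -L -o page.html https://example.com/

# Fetch only the response headers (-I) to see what the server says about itself
curl -I https://example.com/
```

The same tool also speaks the file:// scheme, so it works entirely offline when pointed at local paths.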

The cURL tool started as a way for programmer Daniel Stenberg to let Internet Relay Chat (IRC) users quickly fetch currency exchange rates while still inside their chat window. As detailed in an archived history of the project, it was originally built on an existing command-line tool, httpget, written by Rafael Sagula. A 1.0 version was released in 1997, and the tool was renamed urlget by version 2.0, as it had added support for GOPHER, FTP, and other protocols. By 1998, the tool could upload as well as download, and so version 4.0 was named cURL.

Over the next few years, cURL grew to encompass nearly every Internet protocol, work with certificates and encryption, offer bindings for more than 50 languages, and be included in most Linux distributions and other systems. The cURL project now encompasses both the command-line command itself and the libcurl library. In 2020, the project’s history estimated the command and library had been installed in more than 10 billion instances worldwide.
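That breadth shows up directly in the command line’s options. A hedged sketch of a few of them, with all hosts, filenames, and credentials below being placeholders rather than anything from the project itself:

```shell
# HTTPS with an explicit CA bundle (--cacert), part of curl's certificate handling
curl --cacert ./my-ca.pem https://example.com/

# Upload a file (-T) over FTP, one of the protocols added early in curl's history
curl -T report.txt --user name:password ftp://ftp.example.com/incoming/

# Hand a message to a mail server over SMTP
curl --mail-from me@example.com --mail-rcpt you@example.com \
     --upload-file message.txt smtp://mail.example.com
```

Running `curl --version` lists the protocols a given build supports, which varies with how it was compiled.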