Run LLM inference on Cloud Run with Hugging Face TGI
The following example shows how to run a backend service that uses the Hugging Face Text Generation Inference (TGI) toolkit with Llama 3. Hugging Face TGI is an open source toolkit for deploying and serving Large Language Models (LLMs), and it can run on a Cloud Run service with GPUs enabled.
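As a minimal sketch, a TGI container can be deployed to Cloud Run with a GPU attached using the gcloud CLI. The service name, region, container image, resource sizes, and model ID below are illustrative assumptions, not values prescribed by this guide; serving Llama 3 also requires a Hugging Face access token with permission for the gated model.

```shell
# Hypothetical deployment sketch: run the TGI container on Cloud Run
# with one NVIDIA L4 GPU. Adjust project, region, image, and model
# to match your environment.
gcloud run deploy tgi-llama3 \
  --image=ghcr.io/huggingface/text-generation-inference:latest \
  --region=us-central1 \
  --gpu=1 \
  --gpu-type=nvidia-l4 \
  --cpu=8 \
  --memory=32Gi \
  --no-cpu-throttling \
  --max-instances=1 \
  --set-env-vars=MODEL_ID=meta-llama/Meta-Llama-3-8B-Instruct \
  --set-secrets=HF_TOKEN=hf-token:latest
```

GPU support requires CPU throttling to be disabled, and `--max-instances=1` keeps costs bounded while experimenting; the Hugging Face token is read from a Secret Manager secret (here assumed to be named `hf-token`) rather than passed as plain text.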
Last updated 2025-12-15 UTC.