Transcribe short audio files

This page demonstrates how to transcribe a short audio file to text using synchronous speech recognition.

Synchronous speech recognition returns the recognized text for short audio (less than 60 seconds). To process a speech recognition request for audio longer than 60 seconds, use Asynchronous Speech Recognition.

Audio content can be sent directly to Speech-to-Text from a local file, or Speech-to-Text can process audio content stored in a Google Cloud Storage bucket. See the quotas & limits page for limits on synchronous speech recognition requests.

Perform synchronous speech recognition on a local file

Here is an example of performing synchronous speech recognition on a local audio file:

REST

Refer to the speech:recognize API endpoint for complete details. See the RecognitionConfig reference documentation for more information on configuring the request body.

The audio content supplied in the request body must be base64-encoded. For more information on how to base64-encode audio, see Base64 Encoding Audio Content. For more information on the content field, see RecognitionAudio.
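For illustration, the following short Python sketch (not part of the official samples; the file path is a placeholder) produces a base64 string suitable for the content field:

import base64

# Hypothetical local file path; replace with your own audio file.
with open("audio.raw", "rb") as f:
    input_audio = base64.b64encode(f.read()).decode("utf-8")

# input_audio can now be used as the value of the "content" field in the request body.
print(input_audio[:60] + "...")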

Before using any of the request data, make the following replacements:

  • LANGUAGE_CODE : the BCP-47 code of the language spoken in your audio clip.
  • ENCODING : the encoding of the audio you want to transcribe.
  • SAMPLE_RATE_HERTZ : sample rate in hertz of the audio you want to transcribe.
  • ENABLE_WORD_TIME_OFFSETS : enable this field if you want word start and end time offsets (timestamps) returned.
  • INPUT_AUDIO : a base64-encoded string of the audio data that you want to transcribe.
  • PROJECT_ID : the alphanumeric ID of your Google Cloud project.

HTTP method and URL:

POST https://speech.googleapis.com/v1/speech:recognize

Request JSON body:

{
  "config": {
      "languageCode": "LANGUAGE_CODE",
      "encoding": "ENCODING",
      "sampleRateHertz": SAMPLE_RATE_HERTZ,
      "enableWordTimeOffsets": ENABLE_WORD_TIME_OFFSETS
  },
  "audio": {
    "content": "INPUT_AUDIO"
  }
}

To send your request, use an HTTP client such as curl, supplying an OAuth 2.0 access token in the Authorization header.
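If you prefer to script the call, the following Python sketch shows one way to send the request with the requests library. It is an illustration, not part of the official samples: the audio file name, project ID, and the use of gcloud auth print-access-token to obtain a token are assumptions you should adapt to your environment.

import base64
import json
import subprocess

import requests

# Assumption: the gcloud CLI is installed and authenticated; this obtains a short-lived access token.
access_token = subprocess.run(
    ["gcloud", "auth", "print-access-token"],
    capture_output=True, text=True, check=True,
).stdout.strip()

project_id = "my-project-id"  # placeholder: your Google Cloud project ID

# Placeholder audio file; it must match the encoding and sample rate declared in the config below.
with open("audio.raw", "rb") as f:
    input_audio = base64.b64encode(f.read()).decode("utf-8")

body = {
    "config": {
        "languageCode": "en-US",
        "encoding": "LINEAR16",
        "sampleRateHertz": 16000,
        "enableWordTimeOffsets": False,
    },
    "audio": {"content": input_audio},
}

response = requests.post(
    "https://speech.googleapis.com/v1/speech:recognize",
    headers={
        "Authorization": f"Bearer {access_token}",
        "x-goog-user-project": project_id,
        "Content-Type": "application/json; charset=utf-8",
    },
    json=body,
)
print(json.dumps(response.json(), indent=2))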

You should receive a JSON response similar to the following:

{
  "results": [
    {
      "alternatives": [
        {
          "transcript": "how old is the Brooklyn Bridge",
          "confidence": 0.98267895
        }
      ]
    }
  ]
}

gcloud

Refer to the recognize command for complete details.

To perform speech recognition on a local file, use the Google Cloud CLI, passing in the local filepath of the file to perform speech recognition on.

gcloud ml speech recognize PATH-TO-LOCAL-FILE \
    --language-code='en-US'

If the request is successful, the server returns a response in JSON format:

{
  "results": [
    {
      "alternatives": [
        {
          "confidence": 0.9840146,
          "transcript": "how old is the Brooklyn Bridge"
        }
      ]
    }
  ]
}

Go

To learn how to install and use the client library for Speech-to-Text, see Speech-to-Text client libraries. For more information, see the Speech-to-Text Go API reference documentation.

To authenticate to Speech-to-Text, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

func recognize(w io.Writer, file string) error {
	ctx := context.Background()

	client, err := speech.NewClient(ctx)
	if err != nil {
		return err
	}
	defer client.Close()

	data, err := os.ReadFile(file)
	if err != nil {
		return err
	}

	// Send the contents of the audio file with the encoding and
	// sample rate information to be transcribed.
	resp, err := client.Recognize(ctx, &speechpb.RecognizeRequest{
		Config: &speechpb.RecognitionConfig{
			Encoding:        speechpb.RecognitionConfig_LINEAR16,
			SampleRateHertz: 16000,
			LanguageCode:    "en-US",
		},
		Audio: &speechpb.RecognitionAudio{
			AudioSource: &speechpb.RecognitionAudio_Content{Content: data},
		},
	})
	if err != nil {
		return err
	}

	// Print the results.
	for _, result := range resp.Results {
		for _, alt := range result.Alternatives {
			fmt.Fprintf(w, "\"%v\" (confidence=%3f)\n", alt.Transcript, alt.Confidence)
		}
	}
	return nil
}

Java

To learn how to install and use the client library for Speech-to-Text, see Speech-to-Text client libraries. For more information, see the Speech-to-Text Java API reference documentation.

To authenticate to Speech-to-Text, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

/**
 * Performs speech recognition on raw PCM audio and prints the transcription.
 *
 * @param fileName the path to a PCM audio file to transcribe.
 */
public static void syncRecognizeFile(String fileName) throws Exception {
  try (SpeechClient speech = SpeechClient.create()) {
    Path path = Paths.get(fileName);
    byte[] data = Files.readAllBytes(path);
    ByteString audioBytes = ByteString.copyFrom(data);

    // Configure request with local raw PCM audio
    RecognitionConfig config =
        RecognitionConfig.newBuilder()
            .setEncoding(AudioEncoding.LINEAR16)
            .setLanguageCode("en-US")
            .setSampleRateHertz(16000)
            .build();
    RecognitionAudio audio = RecognitionAudio.newBuilder().setContent(audioBytes).build();

    // Use blocking call to get audio transcript
    RecognizeResponse response = speech.recognize(config, audio);
    List<SpeechRecognitionResult> results = response.getResultsList();

    for (SpeechRecognitionResult result : results) {
      // There can be several alternative transcripts for a given chunk of speech. Just use the
      // first (most likely) one here.
      SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
      System.out.printf("Transcription: %s%n", alternative.getTranscript());
    }
  }
}

Node.js

To learn how to install and use the client library for Speech-to-Text, see Speech-to-Text client libraries. For more information, see the Speech-to-Text Node.js API reference documentation.

To authenticate to Speech-to-Text, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

// Imports the Google Cloud client library
const fs = require('fs');
const speech = require('@google-cloud/speech');

// Creates a client
const client = new speech.SpeechClient();

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// const filename = 'Local path to audio file, e.g. /path/to/audio.raw';
// const encoding = 'Encoding of the audio file, e.g. LINEAR16';
// const sampleRateHertz = 16000;
// const languageCode = 'BCP-47 language code, e.g. en-US';

const config = {
  encoding: encoding,
  sampleRateHertz: sampleRateHertz,
  languageCode: languageCode,
};
const audio = {
  content: fs.readFileSync(filename).toString('base64'),
};

const request = {
  config: config,
  audio: audio,
};

// Detects speech in the audio file
const [response] = await client.recognize(request);
const transcription = response.results
  .map(result => result.alternatives[0].transcript)
  .join('\n');
console.log('Transcription: ', transcription);

Python

To learn how to install and use the client library for Speech-to-Text, see Speech-to-Text client libraries. For more information, see the Speech-to-Text Python API reference documentation.

To authenticate to Speech-to-Text, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.cloud import speech


def transcribe_file(audio_file: str) -> speech.RecognizeResponse:
    """Transcribe the given audio file.
    Args:
        audio_file (str): Path to the local audio file to be transcribed.
            Example: "resources/audio.wav"
    Returns:
        cloud_speech.RecognizeResponse: The response containing the transcription results
    """
    client = speech.SpeechClient()

    with open(audio_file, "rb") as f:
        audio_content = f.read()

    audio = speech.RecognitionAudio(content=audio_content)
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )

    response = client.recognize(config=config, audio=audio)

    # Each result is for a consecutive portion of the audio. Iterate through
    # them to get the transcripts for the entire audio file.
    for result in response.results:
        # The first alternative is the most likely one for this portion.
        print(f"Transcript: {result.alternatives[0].transcript}")

    return response

Additional languages

C#: Please follow the C# setup instructions on the client libraries page and then visit the Speech-to-Text reference documentation for .NET.

PHP: Please follow the PHP setup instructions on the client libraries page and then visit the Speech-to-Text reference documentation for PHP.

Ruby: Please follow the Ruby setup instructions on the client libraries page and then visit the Speech-to-Text reference documentation for Ruby.

Perform synchronous speech recognition on a remote file

For your convenience, the Speech-to-Text API can perform synchronous speech recognition directly on an audio file located in Google Cloud Storage, without the need to send the contents of the audio file in the body of your request.

Here is an example of performing synchronous speech recognition on a file located in Cloud Storage:

REST

Refer to the speech:recognize API endpoint for complete details. See the RecognitionConfig reference documentation for more information on configuring the request body.

Audio stored in Cloud Storage is referenced by its URI in the request body rather than supplied as base64-encoded content. For more information on the uri field, see RecognitionAudio.

Before using any of the request data, make the following replacements:

  • LANGUAGE_CODE : the BCP-47 code of the language spoken in your audio clip.
  • ENCODING : the encoding of the audio you want to transcribe.
  • SAMPLE_RATE_HERTZ : sample rate in hertz of the audio you want to transcribe.
  • ENABLE_WORD_TIME_OFFSETS : enable this field if you want word start and end time offsets (timestamps) returned.
  • STORAGE_BUCKET : the Cloud Storage bucket that contains your audio file.
  • INPUT_AUDIO : the name of the audio file in the bucket that you want to transcribe.
  • PROJECT_ID : the alphanumeric ID of your Google Cloud project.

HTTP method and URL:

POST https://speech.googleapis.com/v1/speech:recognize

Request JSON body:

{
  "config": {
      "languageCode": "LANGUAGE_CODE",
      "encoding": "ENCODING",
      "sampleRateHertz": SAMPLE_RATE_HERTZ,
      "enableWordTimeOffsets": ENABLE_WORD_TIME_OFFSETS
  },
  "audio": {
    "uri": "gs://STORAGE_BUCKET/INPUT_AUDIO"
  }
}

To send your request, use an HTTP client such as curl, supplying an OAuth 2.0 access token in the Authorization header.
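The same scripted approach shown for the local-file request works here; only the audio field of the request body changes, referencing the Cloud Storage object instead of embedded content. A minimal Python sketch of the body (the bucket and object names are placeholders):

# Placeholders: replace with your bucket and object names.
storage_bucket = "my-bucket"
input_audio = "audio.flac"

body = {
    "config": {
        "languageCode": "en-US",
        "encoding": "FLAC",
        "sampleRateHertz": 16000,
        "enableWordTimeOffsets": False,
    },
    "audio": {"uri": f"gs://{storage_bucket}/{input_audio}"},
}
# POST body to https://speech.googleapis.com/v1/speech:recognize exactly as in the
# local-file sketch above (same headers and access token).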

You should receive a JSON response similar to the following:

{
  "results": [
    {
      "alternatives": [
        {
          "transcript": "how old is the Brooklyn Bridge",
          "confidence": 0.98267895
        }
      ]
    }
  ]
}

gcloud

Refer to the recognize command for complete details.

To perform speech recognition on a remote file, use the Google Cloud CLI, passing in the Cloud Storage URI of the file to perform speech recognition on.

gcloud ml speech recognize 'gs://cloud-samples-tests/speech/brooklyn.flac' \
    --language-code='en-US'

If the request is successful, the server returns a response in JSON format:

{
  "results": [
    {
      "alternatives": [
        {
          "confidence": 0.9840146,
          "transcript": "how old is the Brooklyn Bridge"
        }
      ]
    }
  ]
}

Go

To learn how to install and use the client library for Speech-to-Text, see Speech-to-Text client libraries. For more information, see the Speech-to-Text Go API reference documentation.

To authenticate to Speech-to-Text, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

func recognizeGCS(w io.Writer, gcsURI string) error {
	ctx := context.Background()

	client, err := speech.NewClient(ctx)
	if err != nil {
		return err
	}
	defer client.Close()

	// Send the request with the URI (gs://...)
	// and sample rate information to be transcribed.
	resp, err := client.Recognize(ctx, &speechpb.RecognizeRequest{
		Config: &speechpb.RecognitionConfig{
			Encoding:        speechpb.RecognitionConfig_LINEAR16,
			SampleRateHertz: 16000,
			LanguageCode:    "en-US",
		},
		Audio: &speechpb.RecognitionAudio{
			AudioSource: &speechpb.RecognitionAudio_Uri{Uri: gcsURI},
		},
	})
	if err != nil {
		return err
	}

	// Print the results.
	for _, result := range resp.Results {
		for _, alt := range result.Alternatives {
			fmt.Fprintf(w, "\"%v\" (confidence=%3f)\n", alt.Transcript, alt.Confidence)
		}
	}
	return nil
}

Java

To learn how to install and use the client library for Speech-to-Text, see Speech-to-Text client libraries. For more information, see the Speech-to-Text Java API reference documentation.

To authenticate to Speech-to-Text, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

/**
 * Performs speech recognition on remote FLAC file and prints the transcription.
 *
 * @param gcsUri the path to the remote FLAC audio file to transcribe.
 */
public static void syncRecognizeGcs(String gcsUri) throws Exception {
  // Instantiates a client with GOOGLE_APPLICATION_CREDENTIALS
  try (SpeechClient speech = SpeechClient.create()) {
    // Builds the request for remote FLAC file
    RecognitionConfig config =
        RecognitionConfig.newBuilder()
            .setEncoding(AudioEncoding.FLAC)
            .setLanguageCode("en-US")
            .setSampleRateHertz(16000)
            .build();
    RecognitionAudio audio = RecognitionAudio.newBuilder().setUri(gcsUri).build();

    // Use blocking call for getting audio transcript
    RecognizeResponse response = speech.recognize(config, audio);
    List<SpeechRecognitionResult> results = response.getResultsList();

    for (SpeechRecognitionResult result : results) {
      // There can be several alternative transcripts for a given chunk of speech. Just use the
      // first (most likely) one here.
      SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
      System.out.printf("Transcription: %s%n", alternative.getTranscript());
    }
  }
}

Node.js

To learn how to install and use the client library for Speech-to-Text, see Speech-to-Text client libraries. For more information, see the Speech-to-Text Node.js API reference documentation.

To authenticate to Speech-to-Text, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

// Imports the Google Cloud client library
const speech = require('@google-cloud/speech');

// Creates a client
const client = new speech.SpeechClient();

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// const gcsUri = 'gs://my-bucket/audio.raw';
// const encoding = 'Encoding of the audio file, e.g. LINEAR16';
// const sampleRateHertz = 16000;
// const languageCode = 'BCP-47 language code, e.g. en-US';

const config = {
  encoding: encoding,
  sampleRateHertz: sampleRateHertz,
  languageCode: languageCode,
};
const audio = {
  uri: gcsUri,
};

const request = {
  config: config,
  audio: audio,
};

// Detects speech in the audio file
const [response] = await client.recognize(request);
const transcription = response.results
  .map(result => result.alternatives[0].transcript)
  .join('\n');
console.log('Transcription: ', transcription);

Python

To learn how to install and use the client library for Speech-to-Text, see Speech-to-Text client libraries. For more information, see the Speech-to-Text Python API reference documentation.

To authenticate to Speech-to-Text, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.cloud import speech


def transcribe_gcs(audio_uri: str) -> speech.RecognizeResponse:
    """Transcribes the audio file specified by the audio_uri.
    Args:
        audio_uri (str): The Google Cloud Storage URI of the input audio file.
            E.g., gs://cloud-samples-data/speech/audio.flac
    Returns:
        cloud_speech.RecognizeResponse: The response containing the transcription results
    """
    client = speech.SpeechClient()

    audio = speech.RecognitionAudio(uri=audio_uri)
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
        sample_rate_hertz=16000,
        language_code="en-US",
    )

    response = client.recognize(config=config, audio=audio)

    # Each result is for a consecutive portion of the audio. Iterate through
    # them to get the transcripts for the entire audio file.
    for result in response.results:
        # The first alternative is the most likely one for this portion.
        print(f"Transcript: {result.alternatives[0].transcript}")

    return response

Additional languages

C#: Please follow the C# setup instructions on the client libraries page and then visit the Speech-to-Text reference documentation for .NET.

PHP: Please follow the PHP setup instructions on the client libraries page and then visit the Speech-to-Text reference documentation for PHP.

Ruby: Please follow the Ruby setup instructions on the client libraries page and then visit the Speech-to-Text reference documentation for Ruby.
