Universal Sentence Encoder (USE)

This is the language embedding used to train RT-1-X: each natural-language instruction is encoded with the Universal Sentence Encoder into a 512-dimensional vector that conditions the policy.

Usage looks something like this:

import tensorflow_hub as hub

# Load the Universal Sentence Encoder (large) from TF Hub.
embed = hub.load(
    'https://tfhub.dev/google/universal-sentence-encoder-large/5')


def normalize_task_name(task_name):
  # Strip underscores, dashes, and a few dataset-specific tokens so the
  # task name reads as a plain natural-language instruction.
  replaced = task_name.replace('_', ' ').replace('1f', ' ').replace(
      '4f', ' ').replace('-', ' ').replace('50', ' ').replace(
          '55', ' ').replace('56', ' ')
  return replaced.strip()


instruction = "Move the red block to the left of the green block."
# embed() takes a list of strings and returns a batch of 512-d embeddings;
# take the first (and only) one.
natural_language_embedding = embed([normalize_task_name(instruction)])[0]

Adapted from the RT-1-X inference example in the Open X-Embodiment repository: https://github.com/google-deepmind/open_x_embodiment/blob/main/colabs/Minimal_example_for_running_inference_using_RT_1_X_TF_using_tensorflow_datasets.ipynb
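
As a quick sanity check, the result is a single 512-dimensional vector; in the linked colab it is handed to the RT-1-X policy through the observation dict. The sketch below assumes the snippet above has run, and the exact observation key name is taken from that notebook rather than guaranteed here.

# Sanity check: USE-large returns one 512-dimensional vector per input string.
print(natural_language_embedding.shape)  # TensorShape([512])

# The embedding is then fed to the RT-1-X policy via the observation dict;
# the key name below follows the linked colab (assumption, not verified API).
observation = {'natural_language_embedding': natural_language_embedding}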