robocasa.environments.kitchen package

Subpackages

Submodules

robocasa.environments.kitchen.kitchen module

class robocasa.environments.kitchen.kitchen.Kitchen(robots, env_configuration='default', controller_configs=None, gripper_types='default', base_types='default', initialization_noise='default', use_camera_obs=True, use_object_obs=True, reward_scale=1.0, reward_shaping=False, placement_initializer=None, has_renderer=False, has_offscreen_renderer=True, render_camera='robot0_agentview_center', render_collision_mesh=False, render_visual_mesh=True, render_gpu_device_id=-1, control_freq=20, horizon=1000, ignore_done=True, hard_reset=True, camera_names='agentview', camera_heights=256, camera_widths=256, camera_depths=False, renderer='mujoco', renderer_config=None, init_robot_base_pos=None, seed=None, layout_and_style_ids=None, layout_ids=None, style_ids=None, scene_split=None, generative_textures=None, obj_registries=('objaverse',), obj_instance_split=None, use_distractors=False, translucent_robot=False, randomize_cameras=False)

Bases: ManipulationEnv

Initializes a base Kitchen environment (see the usage sketch after the parameter list below).

Parameters:
  • robots – Specification for specific robot arm(s) to be instantiated within this env (e.g.: “Sawyer” would generate one arm; [“Panda”, “Panda”, “Sawyer”] would generate three robot arms)

  • env_configuration (str) – Specifies how to position the robot(s) within the environment. Default is “default”, which should be interpreted accordingly by any subclasses.

  • controller_configs (str or list of dict) – If set, contains relevant controller parameters for creating a custom controller. Else, uses the default controller for this specific task. Should either be single dict if same controller is to be used for all robots or else it should be a list of the same length as “robots” param

  • base_types (None or str or list of str) – type of base, used to instantiate base models from base factory. Default is “default”, which is the default base associated with the robot(s) in the ‘robots’ specification. None results in no base, and any other (valid) model overrides the default base. Should either be single str if same base type is to be used for all robots or else it should be a list of the same length as “robots” param

  • gripper_types (None or str or list of str) – type of gripper, used to instantiate gripper models from gripper factory. Default is “default”, which is the default gripper(s) associated with the robot(s) in the ‘robots’ specification. None removes the gripper, and any other (valid) model overrides the default gripper. Should either be single str if same gripper type is to be used for all robots or else it should be a list of the same length as “robots” param

  • initialization_noise (dict or list of dict) –

    Dict containing the initialization noise parameters. The expected keys and corresponding value types are specified below:

    ’magnitude’:

    The scale factor of uni-variate random noise applied to each of a robot’s given initial joint positions. Setting this value to None or 0.0 results in no noise being applied. If “gaussian” noise is applied, this magnitude scales the standard deviation; if “uniform” noise is applied, this magnitude sets the bounds of the sampling range

    ’type’:

    Type of noise to apply. Can either specify “gaussian” or “uniform”

    Should either be single dict if same noise value is to be used for all robots or else it should be a list of the same length as “robots” param

    Note:

    Specifying “default” will automatically use the default noise settings. Specifying None will automatically create the required dict with “magnitude” set to 0.0.

  • use_camera_obs (bool) – if True, every observation includes rendered image(s)

  • placement_initializer (ObjectPositionSampler) – if provided, will be used to place objects on every reset, else a UniformRandomSampler is used by default.

  • has_renderer (bool) – If True, render the simulation state in a viewer instead of headless mode.

  • has_offscreen_renderer (bool) – True if using off-screen rendering

  • render_camera (str) – Name of camera to render if has_renderer is True. Setting this value to ‘None’ will result in the default angle being applied, which is useful as it can be dragged / panned by the user using the mouse

  • render_collision_mesh (bool) – True if rendering collision meshes in camera. False otherwise.

  • render_visual_mesh (bool) – True if rendering visual meshes in camera. False otherwise.

  • render_gpu_device_id (int) – corresponds to the GPU device id to use for offscreen rendering. Defaults to -1, in which case the device will be inferred from environment variables (GPUS or CUDA_VISIBLE_DEVICES).

  • control_freq (float) – how many control signals to receive in every second. This sets the amount of simulation time that passes between every action input.

  • horizon (int) – Every episode lasts for exactly @horizon timesteps.

  • ignore_done (bool) – True if never terminating the environment (ignore @horizon).

  • hard_reset (bool) – If True, re-loads model, sim, and render object upon a reset call, else, only calls sim.reset and resets all robosuite-internal variables

  • camera_names (str or list of str) –

    name of camera to be rendered. Should either be single str if same name is to be used for all cameras’ rendering or else it should be a list of cameras to render.

    Note:

    At least one camera must be specified if @use_camera_obs is True.

    Note:

    To render all robots’ cameras of a certain type (e.g.: “robotview” or “eye_in_hand”), use the convention “all-{name}” (e.g.: “all-robotview”) to automatically render all camera images from each robot’s camera list.

  • camera_heights (int or list of int) – height of camera frame. Should either be single int if same height is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.

  • camera_widths (int or list of int) – width of camera frame. Should either be single int if same width is to be used for all cameras’ frames or else it should be a list of the same length as “camera names” param.

  • camera_depths (bool or list of bool) – True if rendering RGB-D, and RGB otherwise. Should either be single bool if same depth setting is to be used for all cameras or else it should be a list of the same length as “camera names” param.

  • renderer (str) – Specifies which renderer to use.

  • renderer_config (dict) – dictionary for the renderer configurations

  • init_robot_base_pos (str) – name of the fixture to place the robot near. If None, will randomly select a fixture.

  • seed (int) – environment seed. Default is None, in which case the environment is unseeded, i.e. random

  • layout_and_style_ids (list of list of int) – list of layout and style ids to use for the kitchen.

  • layout_ids ((list of) LayoutType or int) – layout id(s) to use for the kitchen. -1 and None specify all layouts, -2 specifies layouts not involving islands/wall stacks, -3 specifies layouts involving islands/wall stacks, and -4 specifies layouts with dining areas.

  • style_ids ((list of) StyleType or int) – style id(s) to use for the kitchen. -1 and None specify all styles.

  • generative_textures (str) – if set to “100p”, will use AI generated textures

  • obj_registries (tuple of str) – tuple containing the object registries to use for sampling objects. Can contain “objaverse” and/or “aigen” to sample objects from Objaverse, AI-generated objects, or both.

  • obj_instance_split (str) – string for specifying a custom set of object instances to use. “A” specifies all but the last 3 object instances (or the first half - whichever is larger), “B” specifies the rest, and None specifies all.

  • use_distractors (bool) – if True, will add distractor objects to the scene

  • translucent_robot (bool) – if True, will make the robot appear translucent during rendering

  • randomize_cameras (bool) – if True, will add gaussian noise to the position and rotation of the wrist and agentview cameras
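
A minimal usage sketch (hedged: the robot name is an assumption, and a concrete subclass such as KitchenDemo may be required in practice rather than the base class):

    import numpy as np
    from robocasa.environments.kitchen.kitchen import Kitchen

    # Hypothetical direct instantiation; concrete tasks are usually
    # registered Kitchen subclasses created by name (e.g. via robosuite.make).
    env = Kitchen(
        robots="PandaMobile",   # assumed robot name
        has_renderer=False,
        use_camera_obs=False,
        layout_ids=-2,          # layouts without islands/wall stacks
        style_ids=-1,           # all styles
        seed=0,
    )
    obs = env.reset()
    low, high = env.action_spec
    for _ in range(10):
        action = np.zeros(len(low))   # null action
        obs, reward, done, info = env.step(action)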

EXCLUDE_LAYOUTS = []
compute_robot_base_placement_pose(ref_fixture, offset=None)

Steps: (1) find the nearest counter to this fixture, (2) compute the offset relative to this counter, (3) transform the offset to global coordinates. See the sketch after the parameters below.

Parameters:
  • ref_fixture (Fixture) – reference fixture to place the robot near

  • offset (list) – offset to add to the base position
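
A hedged call sketch, assuming the method returns a (position, orientation) pair and that self.sink is a previously registered fixture reference:

    # Hypothetical: compute where to place the robot base near the sink,
    # shifted slightly along the counter.
    base_pos, base_ori = self.compute_robot_base_placement_pose(
        ref_fixture=self.sink, offset=[0.10, 0.0]
    )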

convert_rel_to_abs_action(rel_action)
edit_model_xml(xml_str)

This function postprocesses the model.xml collected from a MuJoCo demonstration for retrospective model changes.

Parameters:

xml_str (str) – Mujoco sim demonstration XML file as string

Returns:

Post-processed xml file as string

Return type:

str
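
A playback-style sketch, assuming the demonstration’s model XML is available as a string and that robosuite’s reset_from_xml_string is used to load the processed result (file path is hypothetical):

    # Adapt a recorded model XML to the current install, then load it.
    with open("demo_model.xml") as f:   # hypothetical path
        raw_xml = f.read()
    processed_xml = env.edit_model_xml(raw_xml)
    env.reset()
    env.reset_from_xml_string(processed_xml)
    env.sim.reset()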

find_object_cfg_by_name(name)

Finds and returns the object configuration with the given name.

Parameters:

name (str) – name of the object configuration to find

Returns:

object configuration with the given name

Return type:

dict

get_ep_meta()

Returns a dictionary containing episode meta data

get_fixture(id, ref=None, size=(0.2, 0.2))

Searches for a fixture by id (name, object, or type).

Parameters:
  • id (str, Fixture, FixtureType) – id of fixture to search for

  • ref (str, Fixture, FixtureType) – if specified, will search for a fixture close to ref (within 0.10 m)

  • size (tuple) – if sampling a counter, the minimum size (x, y) that the counter region must be

Returns:

fixture object

Return type:

Fixture
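
A usage sketch (the FixtureType import path and enum members are assumptions):

    from robocasa.models.fixtures import FixtureType

    # Look up fixtures by type; values are illustrative.
    sink = env.get_fixture(FixtureType.SINK)
    # A counter region near the sink, at least 0.3 m x 0.4 m:
    counter = env.get_fixture(FixtureType.COUNTER, ref=sink, size=(0.3, 0.4))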

get_obj_lang(obj_name='obj', get_preposition=False)

Gets a formatted language string for the object (replaces underscores with spaces).

Parameters:
  • obj_name (str) – name of object

  • get_preposition (bool) – if True, also returns preposition for object

Returns:

language string for object

Return type:

str
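
For example, an object named “cutting_board” would yield “cutting board”. A short sketch (assumes an object configuration named “obj” exists; the two-value return with get_preposition=True is inferred from the description above):

    lang = env.get_obj_lang("obj")
    lang, prep = env.get_obj_lang("obj", get_preposition=True)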

register_fixture_ref(ref_name, fn_kwargs)

Registers a fixture reference for later use. Initializes the fixture if it has not been initialized yet.

Parameters:
  • ref_name (str) – name of the reference

  • fn_kwargs (dict) – keyword arguments to pass to get_fixture

Returns:

fixture object

Return type:

Fixture
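
Typical use inside a task’s scene setup, as a hedged sketch (FixtureType as above):

    # Cache fixture references so they can be reused across resets.
    self.sink = self.register_fixture_ref("sink", dict(id=FixtureType.SINK))
    self.counter = self.register_fixture_ref(
        "counter", dict(id=FixtureType.COUNTER, ref=self.sink)
    )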

reward(action=None)

Reward function for the task. The reward is task-specific and is to be implemented in subclasses. Returns 0 by default.

Returns:

Reward for the task

Return type:

float
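
Subclasses implement the task-specific reward; a minimal sketch of an override (assumes a _check_success helper as in robosuite environments):

    class MyKitchenTask(Kitchen):
        def reward(self, action=None):
            # Hypothetical sparse reward: 1.0 on success, 0.0 otherwise.
            return 1.0 if self._check_success() else 0.0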

sample_object(groups, exclude_groups=None, graspable=None, microwavable=None, washable=None, cookable=None, freezable=None, split=None, obj_registries=None, max_size=(None, None, None), object_scale=None)

Sample a kitchen object from the specified groups and within max_size bounds.

Parameters:
  • groups (list or str) – groups to sample from or the exact xml path of the object to spawn

  • exclude_groups (str or list) – groups to exclude

  • graspable (bool) – whether the sampled object must be graspable

  • washable (bool) – whether the sampled object must be washable

  • microwavable (bool) – whether the sampled object must be microwavable

  • cookable (bool) – whether the sampled object must be cookable

  • freezable (bool) – whether the sampled object must be freezable

  • split (str) – split to sample from. Split “A” specifies all but the last 3 object instances (or the first half - whichever is larger), “B” specifies the rest, and None specifies all.

  • obj_registries (tuple) – registries to sample from

  • max_size (tuple) – max size of the object. If the sampled object is not within the bounds of max size, the function will resample

  • object_scale (float) – scale of the object. If set, will multiply the scale of the sampled object by this value

Returns:

dict: kwargs to apply to the MJCF model for the sampled object

dict: info about the sampled object – the path of the MJCF, the groups the object’s category belongs to, the category of the object, the sampling split the object came from, and the groups the object was sampled from

Return type:

dict
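
A hedged sketch of sampling an object inside a task’s object configuration (parameter values are illustrative; assumes the method returns the two dicts described above as a pair):

    # Sample a graspable vegetable no larger than ~15 cm in x/y,
    # from the objaverse registry only.
    mjcf_kwargs, obj_info = self.sample_object(
        groups="vegetable",
        graspable=True,
        obj_registries=("objaverse",),
        max_size=(0.15, 0.15, None),
    )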

set_cameras()

Adds new kitchen-relevant cameras to the environment. Will randomize cameras if specified.

update_state()

Updates the state of the environment. This involves updating the state of all fixtures in the environment.

visualize(vis_settings)

In addition to the super call, makes the robot semi-transparent.

Parameters:

vis_settings (dict) – Visualization keywords mapped to T/F, determining whether that specific component should be visualized. Should have “grippers” keyword as well as any other relevant options specified.

class robocasa.environments.kitchen.kitchen.KitchenDemo(init_robot_base_pos='cab_main_main_group', obj_groups='all', num_objs=1, *args, **kwargs)

Bases: Kitchen

class robocasa.environments.kitchen.kitchen.KitchenEnvMeta(name, bases, class_dict)

Bases: EnvMeta

Metaclass for registering robocasa environments

robocasa.environments.kitchen.kitchen.register_kitchen_env(target_class)
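
A hedged sketch of defining a new task: subclasses of Kitchen are picked up through KitchenEnvMeta, which registers them (via register_kitchen_env) so they can be created by name; the task name and language key below are hypothetical:

    class OpenSink(Kitchen):   # hypothetical task
        def get_ep_meta(self):
            ep_meta = super().get_ep_meta()
            ep_meta["lang"] = "turn on the sink"   # hypothetical language key
            return ep_meta

        def reward(self, action=None):
            return 0.0   # placeholder; real tasks compute a task-specific reward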

Module contents