
amass_action_converter

Module: GBC.utils.data_preparation.amass_action_converter

This module implements the comprehensive data post-processing pipeline that transforms raw AMASS motion capture data into the GBC Standard Data Format - a unified data specification that enables consistent human-to-robot motion retargeting across different humanoid robot configurations. The module performs critical data quality enhancements including motion filtering, quaternion continuity repair, coordinate system adjustments, velocity computations, foot contact analysis, and reference height corrections.

Data Processing Pipeline

🎯 GBC Standard Data Format

The GBC Standard Data Format represents one of the key contributions of this project, providing a unified specification for humanoid robot motion data that ensures compatibility across different robot configurations and enables robust training pipelines.

📋 Standard Format Specification

The GBC format includes the following standardized fields:

{
    # Core Motion Data
    "poses": torch.Tensor,          # [T, 66] - SMPL+H body poses (angle-axis)
    "trans": torch.Tensor,          # [T, 3]  - Root position (world coordinates)
    "root_orient": torch.Tensor,    # [T, 3]  - Root orientation (angle-axis)
    "actions": torch.Tensor,        # [T, N]  - Robot joint actions

    # Kinematic Data
    "lin_vel": torch.Tensor,        # [T, 3]  - Linear velocity (robot frame)
    "lin_vel_orig": torch.Tensor,   # [T, 3]  - Linear velocity (world frame)
    "ang_vel": torch.Tensor,        # [T, 3]  - Angular velocity (robot frame)
    "ang_vel_world": torch.Tensor,  # [T, 3]  - Angular velocity (world frame)
    "actions_vel": torch.Tensor,    # [T, N]  - Joint velocities

    # Contact and Environment
    "project_gravity": torch.Tensor,  # [T, 3] - Gravity vector in robot frame
    "foot_contact": torch.Tensor,     # [T, 2] - Foot contact states

    # Metadata
    "fps": float,    # Frame rate
    "title": str,    # Motion sequence name
    "fpath": str     # Original file path
}
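A converted sequence can be inspected directly with torch.load. A minimal sketch, assuming the export layout produced by AMASSActionConverter.convert (the concrete file name below is hypothetical):

import torch

# Hypothetical file; actual files land under cfg.export_path/<dataset dir>/<title>_poses.pkl
data = torch.load("/output/converted_actions/ACCAD/Walk_poses.pkl", map_location="cpu")

T = data["poses"].shape[0]
assert data["trans"].shape == (T, 3)
assert data["actions"].shape[0] == T
assert data["foot_contact"].shape == (T, 2)
print(data["title"], data["fps"], data["actions"].shape)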

📚 Dependencies

This module requires the following Python packages:

  • torch - PyTorch framework for tensor operations
  • numpy - Numerical computing library
  • torchaudio_filters - Advanced signal filtering (LowPass filter)
  • scipy - Scientific computing for quaternion operations
  • human_body_prior - SMPL+H body model
  • matplotlib - Visualization for debugging and validation
  • tqdm - Progress bars for batch processing
  • GBC.utils.base.math_utils - Mathematical utilities for robotics
  • GBC.utils.base.rotation_repair - Quaternion continuity repair
  • GBC.utils.data_preparation.pose_transformer - Neural network inference
  • GBC.utils.data_preparation.robot_flip_left_right - Symmetry transformations

🔧 Core Components

📊 TrackingDataPostProcess

Module Name: GBC.utils.data_preparation.amass_action_converter.TrackingDataPostProcess

Definition:

class TrackingDataPostProcess:
    def __init__(self, filter_cfg: FilterCfg):
        self.cfg = filter_cfg
        self.device = filter_cfg.device
        self.lowpass = LowPass(
            cutoff=filter_cfg.filter_cutoff,
            sample_rate=filter_cfg.filter_sample_rate,
            order=filter_cfg.filter_order,
        ).to(self.device)

🔧 Core Functionality: Advanced signal processing and coordinate system transformations for motion data quality enhancement.

🎛️ Motion Filtering

Method Signature:

def filt(self, sequence: torch.Tensor, use_padding: bool = True, window_size: int = 3) -> torch.Tensor

📥 Input Parameters:

  • sequence (torch.Tensor): Input motion sequence [T, D]
  • use_padding (bool): Apply padding to reduce edge artifacts
  • window_size (int): Hampel filter window size for outlier detection

🔧 Processing Pipeline:

# 1. Hampel filter for outlier detection and removal
sequence, has_outlier = hampel_filter(sequence, window_size=window_size)

# 2. Add epsilon to prevent torchaudio_filters zero-value bugs
epsilon = 1e-6
sequence += epsilon

# 3. Apply padding to reduce edge artifacts
if use_padding:
    pad_len = max(self.lowpass.order * 2, 10)
    # Forward padding: linear extrapolation
    forward_pad = 2 * sequence[:1] - sequence[1:pad_len + 1].flip(0)
    # Backward padding: linear extrapolation
    backward_pad = 2 * sequence[-1:] - sequence[-pad_len - 1:-1].flip(0)
    padded_sequence = torch.cat([forward_pad, sequence, backward_pad], dim=0)
else:
    padded_sequence = sequence

# 4. Low-pass filtering
filtered_sequence = self.lowpass(padded_sequence.T).T

# 5. Remove padding
if use_padding:
    filtered_sequence = filtered_sequence[pad_len:-pad_len]

return filtered_sequence
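For context, a minimal usage sketch of filt. The FilterCfg field names follow the constructor shown above; treating them as keyword arguments (and the concrete cutoff and sample-rate values) is an assumption:

filter_cfg = FilterCfg(filter_cutoff=6.0, filter_sample_rate=30.0, filter_order=4, device="cuda:0")
post = TrackingDataPostProcess(filter_cfg)

trans_smooth = post.filt(trans)                      # [T, 3] root translation
actions_smooth = post.filt(actions, window_size=5)   # wider Hampel window for noisier channels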

🌐 Coordinate System Transformations

Root Orientation Adjustment:

def adjust_root_orient(self, root_orient: torch.Tensor) -> torch.Tensor:
    """Transform root orientation to robot coordinate system"""

    # 1. Apply coordinate system rotation (X→Y, Y→Z, Z→X)
    rot_quat = torch.tensor([0.5, 0.5, 0.5, 0.5], dtype=torch.float32, device=self.device)
    coordinate_transform = quaternion_to_angle_axis(rot_quat.repeat(root_orient.shape[0], 1))

    # 2. Apply transformation matrix
    root_rot_mat = rot_vec_to_mat(root_orient)
    transform_mat = rot_vec_to_mat(coordinate_transform).permute(0, 2, 1)
    transformed_rot = torch.bmm(root_rot_mat, transform_mat)

    # 3. Convert back to angle-axis with continuity repair
    root_orient = rot_mat_to_vec(transformed_rot)
    root_quat = angle_axis_to_quaternion(root_orient)
    root_quat = quat_fix(root_quat)  # Ensure quaternion continuity
    root_orient = quaternion_to_angle_axis(root_quat)

    # 4. Advanced smoothing and continuity repair
    root_orient = unwrap_and_smooth_rot_vecs(root_orient, smoothing_window=5)

    return root_orient
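As a quick sanity check (not part of the module), the constant quaternion (w, x, y, z) = (0.5, 0.5, 0.5, 0.5) used above is exactly the cyclic axis permutation X→Y, Y→Z, Z→X. Since all four components are equal, scipy's (x, y, z, w) ordering coincides with the (w, x, y, z) ordering used here:

import numpy as np
from scipy.spatial.transform import Rotation as R

rot = R.from_quat([0.5, 0.5, 0.5, 0.5])
print(rot.apply(np.eye(3)))
# Each row is the image of a unit axis:
#   X -> (0, 1, 0),   Y -> (0, 0, 1),   Z -> (1, 0, 0)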

Initial Pose Normalization:

def transform_for_intial_pose(self, tsl: torch.Tensor, rot_vec: torch.Tensor) -> tuple:
    """Apply global transform so first frame has x=0, y=0, yaw=0"""

    # 1. Compute transformation matrices for all frames
    tf_mat = self.get_batch_tf_matrix(tsl, rot_vec)

    # 2. Create inverse transformation for first frame
    inv_tf = torch.eye(4, dtype=tf_mat.dtype, device=tf_mat.device)
    inv_tf[:3, :3] = self.get_yaw_only_rot_mat(tf_mat[0, :3, :3].T)
    inv_tf[:, 3:] = -inv_tf @ tf_mat[0, :, 3:]
    inv_tf[2, 3] = 0  # Preserve Z-translation

    # 3. Apply transformation to entire sequence
    new_tf_mat = torch.einsum("ij, bjk -> bik", inv_tf, tf_mat)

    # 4. Extract normalized translation and rotation
    new_trans = new_tf_mat[:, :3, 3]
    new_rot_mat = new_tf_mat[:, :3, :3]
    new_rot_vec = rot_mat_to_vec(new_rot_mat)

    return new_trans, new_rot_vec
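A hedged sanity check of the documented invariant; post is assumed to be the TrackingDataPostProcess instance from the sketches above:

new_trans, new_rot_vec = post.transform_for_intial_pose(trans, root_orient)

# First frame should sit at the XY origin; its height is preserved by design
assert torch.allclose(new_trans[0, :2], torch.zeros(2, device=new_trans.device), atol=1e-5)

# First-frame yaw should be zero (yaw = atan2(R[1, 0], R[0, 0]))
R0 = rot_vec_to_mat(new_rot_vec[:1])[0]
assert torch.atan2(R0[1, 0], R0[0, 0]).abs() < 1e-4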

⚡ Velocity Computations

Linear Velocity (Robot Frame):

def calc_lin_vel(self, trans: torch.Tensor, orient: torch.Tensor, fps: float) -> torch.Tensor:
    """Calculate linear velocity in robot coordinate frame"""

    # 1. Compute world-frame velocity
    lin_vel = torch.diff(trans, dim=0) * fps
    lin_vel = torch.cat([lin_vel, lin_vel[-1:]], dim=0)

    # 2. Transform to robot frame
    rot_quat = angle_axis_to_quaternion(orient)
    rot_quat = quat_fix(rot_quat)   # Ensure continuity
    rot_quat = self.filt(rot_quat)  # Smooth quaternions
    lin_vel = quat_rotate_inverse(rot_quat, lin_vel)

    return lin_vel

Angular Velocity (Robot Frame):

def calc_ang_vel(self, rot_vec: torch.Tensor, fps: float) -> torch.Tensor:
    """Calculate angular velocity in robot coordinate frame"""

    # 1. Convert to quaternions and ensure continuity
    rot_quat = angle_axis_to_quaternion(rot_vec)

    # 2. Compute quaternion differences
    delta_q = q_mul(rot_quat[1:], quat_inv(rot_quat[:-1]))

    # 3. Handle quaternion sign ambiguity
    neg_w_mask = delta_q[..., 0] < 0
    delta_q[neg_w_mask] *= -1.0

    # 4. Convert to axis-angle representation
    angle = 2 * torch.acos(torch.clamp(delta_q[..., 0], -1.0, 1.0))
    axis = delta_q[..., 1:]
    axis_norm = torch.norm(axis, p=2, dim=-1, keepdim=True)
    axis = axis / (axis_norm + 1e-8)
    rot_diff_vec = axis * angle.unsqueeze(-1)

    # 5. Scale by frame rate and transform to robot frame
    ang_vel = rot_diff_vec * fps
    ang_vel = torch.cat([ang_vel, ang_vel[-1:]], dim=0)
    ang_vel = quat_rotate_inverse(rot_quat, ang_vel)

    return self.filt(ang_vel)

🏭 AMASSActionConverter

Module Name: GBC.utils.data_preparation.amass_action_converter.AMASSActionConverter

Definition:

class AMASSActionConverter:
    def __init__(self, cfg: BaseCfg, **kwargs):
        self.cfg = cfg
        self.dataset = AMASSDatasetInterpolate.from_cfg(cfg)
        self.fk = RobotKinematics(cfg.urdf_path, device=cfg.device)
        self.model = PoseTransformer(num_actions=self.fk.num_dofs, load_hands=cfg.load_hands).to(cfg.device)
        self.model.load_state_dict(torch.load(cfg.pose_transformer_path, map_location=torch.device(cfg.device)))
        self.post_process = TrackingDataPostProcess(filter_cfg=cfg.filter).to(cfg.device)
        # Additional initialization...

📥 Initialization Parameters:

  • cfg (BaseCfg): Comprehensive configuration object containing all processing parameters
  • urdf_path: Robot description file path
  • pose_transformer_path: Trained model path for pose-to-action conversion
  • export_path: Output directory for processed data
  • mapping_table: SMPL+H to robot joint correspondence
  • smplh_model_path: Human body model file path
  • filter: Signal processing configuration

🎯 Core Conversion Pipeline

Method Signature:

def convert(self,
            foot_names: List[str],
            min_subseq_ratio: float = 0.2,
            data_correction_path: Optional[str] = None,
            data_correction_keys: Optional[Dict[str, str]] = None,
            add_flipped_data: bool = False,
            floor_contact_offset_map: Optional[Dict[str, torch.Tensor]] = None,
            foot_ground_offset: float = 0.0,
            nan_check: bool = False
            ):

📥 Input Parameters:

  • foot_names (List[str]): Robot foot link names for contact detection
  • min_subseq_ratio (float): Minimum ratio of cyclic subsequence length to total sequence length for data augmentation
  • data_correction_path (str): Path to manual data corrections
  • data_correction_keys (Dict[str, str]): Key mapping for correction data
  • add_flipped_data (bool): Generate left-right mirrored data for augmentation
  • floor_contact_offset_map (Dict[str, torch.Tensor]): Foot contact position offsets
  • foot_ground_offset (float): Ground clearance offset for foot contact
  • nan_check (bool): Enable NaN detection and handling

🔧 Processing Pipeline:

1. Dataset Loading and Cyclic Sequence Enhancement

# Load AMASS dataset with interpolation
for i, data in enumerate(tqdm(self.dataset, desc="Converting AMASS to Actions")):
    poses = data['poses'].to(self.device)
    fps = data['fps'].item()
    title = data['title']
    fpath = data['fpath']

    # Quality filtering: reject short or low-quality sequences
    if poses.shape[0] < 30:  # Minimum 30 frames
        continue

    # Cyclic subsequence detection and extension for data augmentation
    # This finds the longest cyclic pattern (e.g., walking gait) and extends it
    # to ensure sufficient sequence length for reinforcement learning training
    longest_subseq = find_longest_cyclic_subsequence(poses, min_ratio=min_subseq_ratio)
    if longest_subseq is not None:
        start_idx, end_idx = longest_subseq
        cyclic_pattern = poses[start_idx:end_idx + 1]

        # Data augmentation: extend cyclic pattern to desired length
        target_length = max(poses.shape[0], 200)  # Ensure minimum training length
        if cyclic_pattern.shape[0] > 0:
            num_cycles = target_length // cyclic_pattern.shape[0] + 1
            extended_poses = cyclic_pattern.repeat(num_cycles, 1)[:target_length]
            poses = extended_poses

2. Pose-to-Action Conversion

@torch.no_grad()
def pose_to_action(self, pose: torch.Tensor) -> torch.Tensor:
    """Convert human poses to robot actions using trained model"""

    if len(pose.shape) == 3:
        pose = pose.squeeze(0)

    pose_body = pose[:, 3:66]  # Extract body pose (exclude root orientation)

    # Batch processing for memory efficiency
    actions = []
    for i in range(0, pose_body.shape[0], self.batch_size):
        batch = pose_body[i:i + self.batch_size]
        batch_actions = self.model(batch)
        actions.append(batch_actions)

    return torch.cat(actions, dim=0)

3. Comprehensive Post-Processing

# Complete data enhancement pipeline
save_dict = {
    "poses": poses,
    "trans": trans,
    "root_orient": root_orient,
    "actions": actions,
    "fps": fps,
    "title": title,
    "fpath": fpath
}

# Apply post-processing transformations
if self.fk.get_contact_links():
    contact_tf_mats = self.fk.get_contact_tf_matrices()
    save_dict = self.post_process(save_dict, contact_tf_mats=contact_tf_mats, filter_pose=True)
else:
    save_dict = self.post_process(save_dict, filter_pose=True)

4. Foot Contact Analysis

# Foot contact detection using forward kinematics
robot_foot_positions = self.fk(actions, link_names=foot_names)   # [T, num_feet, 3]
foot_velocities = torch.diff(robot_foot_positions, dim=0) * fps  # finite difference over time
foot_velocities = torch.cat([foot_velocities, foot_velocities[-1:]], dim=0)  # pad to length T

# Contact detection based on velocity and height thresholds
contact_threshold_vel = 0.05                          # m/s
contact_threshold_height = foot_ground_offset + 0.01  # m

foot_contact = []
for foot_idx, foot_name in enumerate(foot_names):
    foot_vel = torch.norm(foot_velocities[:, foot_idx], dim=-1)
    foot_height = robot_foot_positions[:, foot_idx, 2]

    # Contact when velocity is low AND foot is near ground
    contact = (foot_vel < contact_threshold_vel) & (foot_height < contact_threshold_height)
    foot_contact.append(contact)

save_dict["foot_contact"] = torch.stack(foot_contact, dim=-1)

5. Data Augmentation (Optional)

if add_flipped_data and self.action_flip_left_right is not None:
    # Generate left-right mirrored version
    flipped_dict = self.flip_pose_actions_left_right(deepcopy(save_dict))

    # Save both original and flipped versions
    original_path = os.path.join(self.export_path, fpath, f"{title}_poses.pkl")
    flipped_path = os.path.join(self.export_path, fpath, f"{title}_flipped_poses.pkl")

    torch.save(save_dict, original_path)
    torch.save(flipped_dict, flipped_path)

💡 Usage Examples

🚀 Basic Configuration and Usage

from GBC.utils.data_preparation.amass_action_converter import AMASSActionConverter, AMASSActionConverterCfg
from GBC.utils.data_preparation.robot_flip_left_right import YourFlipperModule
from GBC.utils.base.base_fk import RobotKinematics

# Define robot-specific joint mapping
mapping_table = {
    "Pelvis": "base_link",
    "L_Hip": "l_hip_yaw_link",
    "R_Hip": "r_hip_yaw_link",
    "L_Knee": "l_knee_link",
    "R_Knee": "r_knee_link",
    "L_Ankle": "l_ankle_pitch_link",
    "R_Ankle": "r_ankle_pitch_link",
    "L_Shoulder": "l_arm_roll_link",
    "R_Shoulder": "r_arm_roll_link",
    "L_Elbow": "l_elbow_roll_link",
    "R_Elbow": "r_elbow_roll_link",
    "L_Wrist": "l_wrist_roll_link",
    "R_Wrist": "r_wrist_roll_link",
}

# Initialize configuration
cfg = AMASSActionConverterCfg(
    urdf_path="/path/to/robot.urdf",
    pose_transformer_path="/path/to/trained_model.pt",
    export_path="/output/converted_actions",
    mapping_table=mapping_table,
    smplh_model_path="/path/to/smplh/model.npz",
    dmpls_model_path="/path/to/dmpls/model.npz",
    smpl_fits_dir="/path/to/fitted_params.pt",
    device="cuda:0"
)

# Specify dataset subset using regex patterns
cfg.specialize_dir = ["ACCAD/*", "CMU/*"]  # Process specific datasets

# Initialize converter
converter = AMASSActionConverter.from_cfg(cfg)

🔄 Advanced Configuration with Symmetry

# Configure bilateral symmetry for data augmentation
flipper = YourFlipperModule()
fk = RobotKinematics(cfg.urdf_path)
flipper.prepare_flip_joint_ids(fk.get_dof_names())
converter.set_flipper(flipper)

# Define foot contact parameters
foot_names = ["l_ankle_pitch_link", "r_ankle_pitch_link"]
foot_ground_offset = 0.05 # 5cm ground clearance

# Execute comprehensive conversion pipeline
converter.convert(
foot_names=foot_names,
foot_ground_offset=foot_ground_offset,
add_flipped_data=True, # Generate mirrored data
floor_contact_offset_map={
"l_ankle_pitch_link": torch.Tensor([0, 0, -foot_ground_offset]),
"r_ankle_pitch_link": torch.Tensor([0, 0, -foot_ground_offset]),
},
min_subseq_ratio=0.3, # Require cycle to be at least 30% of sequence for quality augmentation
nan_check=True # Enable NaN detection
)

🎯 Production Pipeline Example

# Production-grade configuration based on Turin robot
def setup_turin_converter():
    """Setup converter for Turin humanoid robot"""

    # Turin-specific configuration
    mapping_table = {
        "Pelvis": "base_link",
        "L_Hip": "l_hip_yaw_link", "R_Hip": "r_hip_yaw_link",
        "L_Knee": "l_knee_link", "R_Knee": "r_knee_link",
        "L_Ankle": "l_ankle_pitch_link", "R_Ankle": "r_ankle_pitch_link",
        "L_Shoulder": "l_arm_roll_link", "R_Shoulder": "r_arm_roll_link",
        "L_Elbow": "l_elbow_roll_link", "R_Elbow": "r_elbow_roll_link",
        "L_Wrist": "l_wrist_roll_link", "R_Wrist": "r_wrist_roll_link",
    }

    cfg = AMASSActionConverterCfg(
        urdf_path="/path/to/turin_humanoid.urdf",
        pose_transformer_path="/path/to/turin_model.pt",
        export_path="/data/turin_converted",
        mapping_table=mapping_table,
        smplh_model_path=DATA_PATHS.smplh_model_path,
        dmpls_model_path=DATA_PATHS.dmpls_model_path,
        smpl_fits_dir=DATA_PATHS.smplh_fit_result_path,
        device="cuda:1"
    )

    # Process high-quality motion datasets
    cfg.specialize_dir = ["ACCAD/*"]  # ACCAD contains high-quality locomotion data

    return cfg

# Initialize and execute
cfg = setup_turin_converter()
converter = AMASSActionConverter.from_cfg(cfg)

# Configure Turin-specific symmetry
from GBC.utils.data_preparation.robot_flip_left_right import TurinV3FlipLeftRight
flipper = TurinV3FlipLeftRight()
fk = RobotKinematics(cfg.urdf_path)
flipper.prepare_flip_joint_ids(fk.get_dof_names())
converter.set_flipper(flipper)

# Execute with Turin-specific parameters
foot_names = ["l_ankle_pitch_link", "r_ankle_pitch_link"]
foot_ground_offset = 0.05

converter.convert(
    foot_names=foot_names,
    foot_ground_offset=foot_ground_offset,
    add_flipped_data=True,
    floor_contact_offset_map={
        "l_ankle_pitch_link": torch.Tensor([0, 0, -foot_ground_offset]),
        "r_ankle_pitch_link": torch.Tensor([0, 0, -foot_ground_offset]),
    },
    min_subseq_ratio=0.25,  # Accept shorter cycles for extension-based augmentation
    nan_check=True
)

🔧 Advanced Features

📊 Data Quality Validation

Cyclic Subsequence Detection and Extension:

import numpy as np
from scipy.signal import find_peaks

def find_longest_cyclic_subsequence(poses: torch.Tensor, min_ratio: float = 0.2) -> Optional[Tuple[int, int]]:
    """
    Find longest cyclic subsequence for data augmentation in reinforcement learning

    This function identifies the longest repeating pattern (e.g., walking gait cycle)
    within a motion sequence. The detected cyclic pattern can then be extended/repeated
    to create longer training sequences, which is crucial for RL algorithms that
    require sufficient sequence length for effective learning.

    Args:
        poses: Input motion sequence [T, 66]
        min_ratio: Minimum ratio of cyclic length to total sequence length
                   (e.g., 0.2 means cyclic pattern must be at least 20% of total length)

    Returns:
        Tuple of (start_idx, end_idx) for the longest cyclic pattern, or None
    """

    # Focus on key locomotion joints for cycle detection
    hip_knee_joints = poses[:, [6, 9, 12, 15]]  # L_Hip, R_Hip, L_Knee, R_Knee components

    # Autocorrelation analysis for period detection (non-negative lags only)
    signal = torch.mean(hip_knee_joints, dim=1).cpu().numpy()
    signal = signal - signal.mean()
    autocorr = np.correlate(signal, signal, mode='full')[len(signal) - 1:]

    # Find peaks indicating cyclic patterns
    peaks, _ = find_peaks(autocorr, distance=len(signal) // 4)

    if len(peaks) > 0:
        cycle_length = int(peaks[0])  # lag of the first autocorrelation peak = dominant period

        # Check if cycle meets minimum ratio requirement
        if cycle_length >= len(signal) * min_ratio:
            return (0, cycle_length)

        # For data augmentation: even short cycles can be valuable if extended
        if cycle_length > 10:  # Minimum meaningful cycle length
            return (0, cycle_length)

    return None


def extend_cyclic_sequence(cyclic_pattern: torch.Tensor, target_length: int) -> torch.Tensor:
    """
    Extend a cyclic motion pattern to desired length for RL training

    This data augmentation technique repeats a detected cyclic pattern
    (e.g., walking steps) to create longer sequences suitable for
    reinforcement learning training, where longer episodes improve
    learning stability and sample efficiency.

    Args:
        cyclic_pattern: Detected cyclic motion pattern [C, 66]
        target_length: Desired output sequence length

    Returns:
        Extended sequence [target_length, 66]
    """
    if cyclic_pattern.shape[0] == 0:
        return cyclic_pattern

    # Calculate number of repetitions needed
    num_cycles = target_length // cyclic_pattern.shape[0] + 1

    # Repeat pattern and trim to exact target length
    extended_sequence = cyclic_pattern.repeat(num_cycles, 1)[:target_length]

    # Smooth transitions between cycle boundaries for continuity
    if num_cycles > 1:
        cycle_len = cyclic_pattern.shape[0]
        for i in range(1, num_cycles):
            boundary_idx = i * cycle_len
            if boundary_idx < extended_sequence.shape[0]:
                # Apply smoothing window around cycle boundaries
                smooth_window = min(5, cycle_len // 4)
                for j in range(smooth_window):
                    if boundary_idx + j < extended_sequence.shape[0]:
                        alpha = j / smooth_window
                        extended_sequence[boundary_idx + j] = (
                            (1 - alpha) * extended_sequence[boundary_idx + j] +
                            alpha * extended_sequence[boundary_idx + j - cycle_len]
                        )

    return extended_sequence

NaN Detection and Handling:

def check_nan_values(data_dict: Dict[str, torch.Tensor]) -> bool:
    """Comprehensive NaN detection across all data fields"""

    nan_detected = False
    for key, value in data_dict.items():
        if isinstance(value, torch.Tensor):
            if torch.isnan(value).any():
                print(f"NaN detected in {key}")
                nan_detected = True

                # Interpolation-based repair for small gaps
                if key in ["poses", "trans", "actions"]:
                    data_dict[key] = interpolate_nan_values(value)

    return not nan_detected
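interpolate_nan_values is referenced above but not shown in this excerpt. A minimal sketch of one possible implementation (per-channel linear interpolation over NaN gaps), offered as an assumption rather than the module's actual code:

def interpolate_nan_values(x: torch.Tensor) -> torch.Tensor:
    """Linearly interpolate over NaN entries, independently per channel."""
    import numpy as np

    arr = x.detach().cpu().numpy()
    for c in range(arr.shape[1]):
        col = arr[:, c]
        nan_mask = np.isnan(col)
        if nan_mask.any() and not nan_mask.all():
            idx = np.arange(len(col))
            col[nan_mask] = np.interp(idx[nan_mask], idx[~nan_mask], col[~nan_mask])
    return torch.as_tensor(arr, dtype=x.dtype, device=x.device)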

🎨 Visualization and Debugging

Motion Trajectory Visualization:

def visualize_trajectory(self, trans: torch.Tensor, root_orient: torch.Tensor,
                         axis_length: float, title: str, save_dir: str = None):
    """Visualize robot trajectory with orientation indicators"""

    fig = plt.figure(figsize=(12, 8))
    ax = fig.add_subplot(111, projection='3d')

    # Plot trajectory
    ax.plot(trans[:, 0], trans[:, 1], trans[:, 2], 'b-', linewidth=2, label='Trajectory')

    # Add orientation arrows at key frames
    step = max(1, len(trans) // 20)
    for i in range(0, len(trans), step):
        rot_mat = rot_vec_to_mat(root_orient[i:i + 1])

        # Draw coordinate frame
        origin = trans[i]
        x_axis = origin + rot_mat[0, :, 0] * axis_length
        y_axis = origin + rot_mat[0, :, 1] * axis_length
        z_axis = origin + rot_mat[0, :, 2] * axis_length

        ax.quiver(*origin, *(x_axis - origin), color='r', arrow_length_ratio=0.1)
        ax.quiver(*origin, *(y_axis - origin), color='g', arrow_length_ratio=0.1)
        ax.quiver(*origin, *(z_axis - origin), color='b', arrow_length_ratio=0.1)

    ax.set_xlabel('X (m)')
    ax.set_ylabel('Y (m)')
    ax.set_zlabel('Z (m)')
    ax.set_title(f'{title} - Robot Trajectory')

    if save_dir:
        plt.savefig(os.path.join(save_dir, f'{title}_trajectory.png'), dpi=300)
    plt.show()

Filter Response Analysis:

def visualize_filter(self, origin: torch.Tensor, filtered: torch.Tensor,
                     max_duration: float = 25.0, title: str = "Filter Visualization",
                     save_dir: str = None, cyclic_subseq: Tuple = None):
    """Compare original vs filtered motion data"""

    time_steps = min(int(max_duration * 30), origin.shape[0])  # 30 fps assumption
    t = torch.arange(time_steps) / 30.0

    fig, axes = plt.subplots(origin.shape[1], 1, figsize=(15, 3 * origin.shape[1]))

    for i in range(origin.shape[1]):
        axes[i].plot(t, origin[:time_steps, i].cpu(), 'b-', alpha=0.7, label='Original')
        axes[i].plot(t, filtered[:time_steps, i].cpu(), 'r-', linewidth=2, label='Filtered')

        if cyclic_subseq:
            start, end = cyclic_subseq
            axes[i].axvspan(start / 30, end / 30, alpha=0.3, color='green', label='Cyclic Subsequence')

        axes[i].set_ylabel(f'Joint {i}')
        axes[i].legend()
        axes[i].grid(True, alpha=0.3)

    axes[-1].set_xlabel('Time (s)')
    plt.suptitle(f'{title} - Filter Response')
    plt.tight_layout()

    if save_dir:
        plt.savefig(os.path.join(save_dir, f'{title}_filter_comparison.png'), dpi=300)
    plt.show()

🚨 Best Practices

✅ Data Processing Guidelines

  1. Quality Filtering: Use appropriate min_subseq_ratio values (0.2-0.3) to ensure cyclic patterns are substantial enough for meaningful data augmentation
  2. Coordinate Systems: Ensure consistent coordinate system transformations across all robots
  3. Contact Detection: Tune foot_ground_offset based on robot foot geometry
  4. Filtering Parameters: Adjust filter cutoff frequencies based on motion type and robot dynamics
  5. Memory Management: Use appropriate batch sizes for pose-to-action conversion

🔧 Configuration Optimization

Robot-Specific Tuning:

# Example configuration for different robot types
ROBOT_CONFIGS = {
    "turin_v3": {
        "foot_ground_offset": 0.05,
        "filter_cutoff": 6.0,
        "min_subseq_ratio": 0.25
    },
    "unitree_h1": {
        "foot_ground_offset": 0.03,
        "filter_cutoff": 8.0,
        "min_subseq_ratio": 0.3
    }
}

def get_robot_config(robot_type: str) -> Dict:
    """Get optimized configuration for specific robot"""
    return ROBOT_CONFIGS.get(robot_type, ROBOT_CONFIGS["turin_v3"])
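A short usage sketch for the lookup above; wiring filter_cutoff into cfg.filter assumes the FilterCfg field name shown earlier in this page:

robot_cfg = get_robot_config("unitree_h1")
cfg.filter.filter_cutoff = robot_cfg["filter_cutoff"]  # assumed FilterCfg field

converter.convert(
    foot_names=foot_names,
    foot_ground_offset=robot_cfg["foot_ground_offset"],
    min_subseq_ratio=robot_cfg["min_subseq_ratio"],
)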

🐛 Troubleshooting

Common Issues and Solutions:

  1. Memory Overflow:

    # Reduce batch size in configuration
    cfg.batch_size = 32 # Instead of 64 or 128

    # Process smaller dataset subsets
    cfg.specialize_dir = ["ACCAD/Female1Walking_c3d"] # Specific subset
  2. Poor Contact Detection:

    # Adjust contact thresholds
    contact_threshold_vel = 0.02 # Lower for more sensitive detection
    contact_threshold_height = foot_ground_offset + 0.005 # Tighter height tolerance
  3. Quaternion Discontinuities:

    # Enable advanced quaternion repair
    root_quat = quat_fix(root_quat)
    root_orient = unwrap_and_smooth_rot_vecs(root_orient, smoothing_window=7)
  4. NaN Values in Output:

    # Enable comprehensive NaN checking
    converter.convert(
        foot_names=foot_names,
        nan_check=True,  # Enable detection
        # Additional parameters...
    )

This comprehensive processing pipeline ensures that raw AMASS motion capture data is transformed into high-quality, robot-ready motion sequences that conform to the GBC Standard Data Format, enabling robust and consistent human-to-robot motion retargeting across diverse humanoid platforms.