
Commit 2b4b4ae

more flexible kinematic model + handling different regional decimal separators + Section on how to run faster inference in Readme

1 parent bd159da

File tree: 8 files changed, +95 -71 lines


Pose2Sim/Demo_Batch/Config.toml

Lines changed: 1 addition & 1 deletion
@@ -82,7 +82,7 @@ det_frequency = 4 # Run person detection only every N frames, and inbetween trac
 device = 'auto' # 'auto', 'CPU', 'CUDA', 'MPS', 'ROCM'
 backend = 'auto' # 'auto', 'openvino', 'onnxruntime', 'opencv'
 
-tracking_mode = 'sports2d' # 'sports2d' or 'deepsort'. 'deepsort' is slower but more robust in difficult configurations
+tracking_mode = 'sports2d' # 'none', 'sports2d' or 'deepsort'. 'deepsort' is slower but more robust in difficult configurations
 deepsort_params = """{'max_age':30, 'n_init':3, 'nms_max_overlap':0.8, 'max_cosine_distance':0.3, 'nn_budget':200, 'max_iou_distance':0.8}""" # """{dictionary between 3 double quotes}"""
 # More robust in crowded scenes but Can be tricky to parametrize. More information there: https://github.com/levan92/deep_sort_realtime/blob/master/deep_sort_realtime/deepsort_tracker.py#L51
 # Note: For faster and more robust tracking, use {'embedder_gpu': True, embedder':'torchreid'}, which uses the GPU and runs osnet_ain_x1_0 by default. requires `pip install torch torchvision torchreid gdown tensorboard`

Pose2Sim/Demo_Batch/Trial_1/Config.toml

Lines changed: 1 addition & 1 deletion
@@ -81,7 +81,7 @@
 # device = 'auto' # 'auto', 'CPU', 'CUDA', 'MPS', 'ROCM'
 # backend = 'auto' # 'auto', 'openvino', 'onnxruntime', 'opencv'
 
-# tracking_mode = 'sports2d' # 'sports2d' or 'deepsort'. 'deepsort' is slower but more robust in difficult configurations
+# tracking_mode = 'sports2d' # 'none', 'sports2d' or 'deepsort'. 'deepsort' is slower but more robust in difficult configurations
 # deepsort_params = """{'max_age':30, 'n_init':3, 'nms_max_overlap':0.8, 'max_cosine_distance':0.3, 'nn_budget':200, 'max_iou_distance':0.8}""" # """{dictionary between 3 double quotes}"""
 # # More robust in crowded scenes but Can be tricky to parametrize. More information there: https://github.com/levan92/deep_sort_realtime/blob/master/deep_sort_realtime/deepsort_tracker.py#L51
 # # Note: For faster and more robust tracking, use {'embedder_gpu': True, embedder':'torchreid'}, which uses the GPU and runs osnet_ain_x1_0 by default. requires `pip install torch torchvision torchreid gdown tensorboard`

Pose2Sim/Demo_Batch/Trial_2/Config.toml

Lines changed: 1 addition & 1 deletion
@@ -80,7 +80,7 @@ participant_mass = [70.0, 63.5] # float (eg 70.0), or list of floats (eg [70
 # device = 'auto' # 'auto', 'CPU', 'CUDA', 'MPS', 'ROCM'
 # backend = 'auto' # 'auto', 'openvino', 'onnxruntime', 'opencv'
 
-# tracking_mode = 'sports2d' # 'sports2d' or 'deepsort'. 'deepsort' is slower but more robust in difficult configurations
+# tracking_mode = 'sports2d' # 'none', 'sports2d' or 'deepsort'. 'deepsort' is slower but more robust in difficult configurations
 # deepsort_params = """{'max_age':30, 'n_init':3, 'nms_max_overlap':0.8, 'max_cosine_distance':0.3, 'nn_budget':200, 'max_iou_distance':0.8}""" # """{dictionary between 3 double quotes}"""
 # # More robust in crowded scenes but Can be tricky to parametrize. More information there: https://github.com/levan92/deep_sort_realtime/blob/master/deep_sort_realtime/deepsort_tracker.py#L51
 # # Note: For faster and more robust tracking, use {'embedder_gpu': True, embedder':'torchreid'}, which uses the GPU and runs osnet_ain_x1_0 by default. requires `pip install torch torchvision torchreid gdown tensorboard`

Pose2Sim/Demo_MultiPerson/Config.toml

Lines changed: 1 addition & 1 deletion
@@ -81,7 +81,7 @@ det_frequency = 4 # Run person detection only every N frames, and inbetween trac
 device = 'auto' # 'auto', 'CPU', 'CUDA', 'MPS', 'ROCM'
 backend = 'auto' # 'auto', 'openvino', 'onnxruntime', 'opencv'
 
-tracking_mode = 'sports2d' # 'sports2d' or 'deepsort'. 'deepsort' is slower but more robust in difficult configurations
+tracking_mode = 'sports2d' # 'none', 'sports2d' or 'deepsort'. 'deepsort' is slower but more robust in difficult configurations
 deepsort_params = """{'max_age':30, 'n_init':3, 'nms_max_overlap':0.8, 'max_cosine_distance':0.3, 'nn_budget':200, 'max_iou_distance':0.8}""" # """{dictionary between 3 double quotes}"""
 # More robust in crowded scenes but Can be tricky to parametrize. More information there: https://github.com/levan92/deep_sort_realtime/blob/master/deep_sort_realtime/deepsort_tracker.py#L51
 # Note: For faster and more robust tracking, use {'embedder_gpu': True, embedder':'torchreid'}, which uses the GPU and runs osnet_ain_x1_0 by default. requires `pip install torch torchvision torchreid gdown tensorboard`

Pose2Sim/Demo_SinglePerson/Config.toml

Lines changed: 1 addition & 1 deletion
@@ -82,7 +82,7 @@ det_frequency = 4 # Run person detection only every N frames, and inbetween trac
 device = 'auto' # 'auto', 'CPU', 'CUDA', 'MPS', 'ROCM'
 backend = 'auto' # 'auto', 'openvino', 'onnxruntime', 'opencv'
 
-tracking_mode = 'sports2d' # 'sports2d' or 'deepsort'. 'deepsort' is slower but more robust in difficult configurations
+tracking_mode = 'sports2d' # 'none', 'sports2d' or 'deepsort'. 'deepsort' is slower but more robust in difficult configurations
 deepsort_params = """{'max_age':30, 'n_init':3, 'nms_max_overlap':0.8, 'max_cosine_distance':0.3, 'nn_budget':200, 'max_iou_distance':0.8}""" # """{dictionary between 3 double quotes}"""
 # More robust in crowded scenes but Can be tricky to parametrize. More information there: https://github.com/levan92/deep_sort_realtime/blob/master/deep_sort_realtime/deepsort_tracker.py#L51
 # Note: For faster and more robust tracking, use {'embedder_gpu': True, embedder':'torchreid'}, which uses the GPU and runs osnet_ain_x1_0 by default. requires `pip install torch torchvision torchreid gdown tensorboard`

Pose2Sim/kinematics.py

Lines changed: 48 additions & 65 deletions
@@ -47,6 +47,9 @@
     best_coords_for_measurements, compute_height
 from Pose2Sim.skeletons import *
 
+import locale
+locale.setlocale(locale.LC_NUMERIC, 'C')
+
 
 ## AUTHORSHIP INFORMATION
 __author__ = "Ivan Sun, David Pagnon"
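
The two added lines above are the "regional decimal separators" part of the commit message: forcing the numeric locale to 'C' keeps floating-point numbers written with a '.' even on systems configured for a comma decimal separator, presumably so that the files Pose2Sim writes and reads stay parseable. A minimal sketch of the behaviour being guarded against; the 'de_DE.UTF-8' locale is only an illustrative example and must be installed on your system:

import locale

# On a comma-decimal locale, locale-aware formatting produces "3,50",
# which breaks any parser that expects '.' as the decimal separator.
try:
    locale.setlocale(locale.LC_NUMERIC, 'de_DE.UTF-8')  # example comma-decimal locale, if installed
    print(locale.format_string('%.2f', 3.5))            # -> '3,50'
except locale.Error:
    pass  # that locale is not available on this machine

locale.setlocale(locale.LC_NUMERIC, 'C')                 # what kinematics.py now enforces
print(locale.format_string('%.2f', 3.5))                 # -> '3.50'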
@@ -110,30 +113,21 @@ def get_markers_path(pose_model, osim_setup_dir):
     - markers_path: (Path) Path to the marker file.
     '''
 
-    if pose_model == 'BODY_25B':
-        marker_file = 'Markers_Body25b.xml'
-    elif pose_model == 'BODY_25':
-        marker_file = 'Markers_Body25.xml'
-    elif pose_model == 'BODY_135':
-        marker_file = 'Markers_Body135.xml'
-    elif pose_model == 'BLAZEPOSE':
-        marker_file = 'Markers_Blazepose.xml'
-    elif pose_model == 'HALPE_26':
-        marker_file = 'Markers_Halpe26.xml'
-    elif pose_model == 'HALPE_68' or pose_model == 'HALPE_136':
-        marker_file = 'Markers_Halpe68_136.xml'
-    elif pose_model == 'COCO_133' or pose_model == 'COCO_133_WRIST':
-        marker_file = 'Markers_Coco133.xml'
-    # elif pose_model == 'COCO' or pose_model == 'MPII':
-    #     marker_file = 'Markers_Coco.xml'
-    elif pose_model == 'COCO_17':
-        marker_file = 'Markers_Coco17.xml'
-    elif pose_model == 'LSTM':
-        marker_file = 'Markers_LSTM.xml'
+    pose_model = ''.join(pose_model.split('_')).lower()
+    if pose_model == 'halpe68' or pose_model == 'halpe136':
+        marker_file = 'Markers_Halpe68_136.xml'.lower()
+    elif pose_model == 'coco133' or pose_model == 'coco133wrist':
+        marker_file = 'Markers_Coco133.xml'.lower()
     else:
-        raise ValueError(f"Pose model '{pose_model}' not supported yet.")
+        marker_file = f'Markers_{pose_model}.xml'.lower()
 
-    markers_path = osim_setup_dir / marker_file
+    try:
+        markers_path = [
+            f for f in osim_setup_dir.glob('Markers_*.xml')
+            if f.name.lower() == marker_file
+        ][0]
+    except:
+        raise ValueError(f"Pose model '{pose_model}' not supported yet.")
 
     return markers_path
 
@@ -150,30 +144,21 @@ def get_scaling_setup(pose_model, osim_setup_dir):
     - scaling_setup_path: (Path) Path to the OpenSim scaling setup file.
     '''
 
-    if pose_model == 'BODY_25B':
-        scaling_setup_file = 'Scaling_Setup_Pose2Sim_Body25b.xml'
-    elif pose_model == 'BODY_25':
-        scaling_setup_file = 'Scaling_Setup_Pose2Sim_Body25.xml'
-    elif pose_model == 'BODY_135':
-        scaling_setup_file = 'Scaling_Setup_Pose2Sim_Body135.xml'
-    elif pose_model == 'BLAZEPOSE':
-        scaling_setup_file = 'Scaling_Setup_Pose2Sim_Blazepose.xml'
-    elif pose_model == 'HALPE_26':
-        scaling_setup_file = 'Scaling_Setup_Pose2Sim_Halpe26.xml'
-    elif pose_model == 'HALPE_68' or pose_model == 'HALPE_136':
-        scaling_setup_file = 'Scaling_Setup_Pose2Sim_Halpe68_136.xml'
-    elif pose_model == 'COCO_133' or pose_model == 'COCO_133_WRIST':
-        scaling_setup_file = 'Scaling_Setup_Pose2Sim_Coco133.xml'
-    # elif pose_model == 'COCO' or pose_model == 'MPII':
-    #     scaling_setup_file = 'Scaling_Setup_Pose2Sim_Coco.xml'
-    elif pose_model == 'COCO_17':
-        scaling_setup_file = 'Scaling_Setup_Pose2Sim_Coco17.xml'
-    elif pose_model == 'LSTM':
-        scaling_setup_file = 'Scaling_Setup_Pose2Sim_LSTM.xml'
+    pose_model = ''.join(pose_model.split('_')).lower()
+    if pose_model == 'halpe68' or pose_model == 'halpe136':
+        scaling_setup_file = 'Scaling_Setup_Pose2Sim_Halpe68_136.xml'.lower()
+    elif pose_model == 'coco133' or pose_model == 'coco133wrist':
+        scaling_setup_file = 'Scaling_Setup_Pose2Sim_Coco133.xml'.lower()
     else:
-        raise ValueError(f"Pose model '{pose_model}' not supported yet.")
+        scaling_setup_file = f'Scaling_Setup_Pose2Sim_{pose_model}.xml'.lower()
 
-    scaling_setup_path = osim_setup_dir / scaling_setup_file
+    try:
+        scaling_setup_path = [
+            f for f in osim_setup_dir.glob('Scaling_Setup_Pose2Sim_*.xml')
+            if f.name.lower() == scaling_setup_file
+        ][0]
+    except:
+        raise ValueError(f"Pose model '{pose_model}' not supported yet.")
 
     return scaling_setup_path
 
@@ -190,30 +175,24 @@ def get_IK_Setup(pose_model, osim_setup_dir):
     - ik_setup_path: (Path) Path to the OpenSim IK setup file.
     '''
 
-    if pose_model == 'BODY_25B':
-        ik_setup_file = 'IK_Setup_Pose2Sim_Body25b.xml'
-    elif pose_model == 'BODY_25':
-        ik_setup_file = 'IK_Setup_Pose2Sim_Body25.xml'
-    elif pose_model == 'BODY_135':
-        ik_setup_file = 'IK_Setup_Pose2Sim_Body135.xml'
-    elif pose_model == 'BLAZEPOSE':
-        ik_setup_file = 'IK_Setup_Pose2Sim_Blazepose.xml'
-    elif pose_model == 'HALPE_26':
-        ik_setup_file = 'IK_Setup_Pose2Sim_Halpe26.xml'
-    elif pose_model == 'HALPE_68' or pose_model == 'HALPE_136':
-        ik_setup_file = 'IK_Setup_Pose2Sim_Halpe68_136.xml'
-    elif pose_model == 'COCO_133' or pose_model == 'COCO_133_WRIST':
-        ik_setup_file = 'IK_Setup_Pose2Sim_Coco133.xml'
-    # elif pose_model == 'COCO' or pose_model == 'MPII':
-    #     ik_setup_file = 'IK_Setup_Pose2Sim_Coco.xml'
-    elif pose_model == 'COCO_17':
-        ik_setup_file = 'IK_Setup_Pose2Sim_Coco17.xml'
-    elif pose_model == 'LSTM':
-        ik_setup_file = 'IK_Setup_Pose2Sim_withHands_LSTM.xml'
+    pose_model = ''.join(pose_model.split('_')).lower()
+    if pose_model == 'halpe68' or pose_model == 'halpe136':
+        ik_setup_file = 'IK_Setup_Pose2Sim_Halpe68_136.xml'.lower()
+    elif pose_model == 'coco133' or pose_model == 'coco133wrist':
+        ik_setup_file = 'IK_Setup_Pose2Sim_Coco133.xml'.lower()
+    elif pose_model == 'lstm':
+        ik_setup_file = 'IK_Setup_Pose2Sim_withHands_LSTM.xml'.lower()
     else:
+        ik_setup_file = f'IK_Setup_Pose2Sim_{pose_model}.xml'.lower()
+
+    try:
+        ik_setup_path = [
+            f for f in osim_setup_dir.glob('IK_Setup_Pose2Sim_*.xml')
+            if f.name.lower() == ik_setup_file
+        ][0]
+    except:
         raise ValueError(f"Pose model '{pose_model}' not supported yet.")
 
-    ik_setup_path = osim_setup_dir / ik_setup_file
     return ik_setup_path
 
@@ -681,6 +660,10 @@ def kinematics_all(config_dict):
         logging.info(f"\tScaled model saved to {(kinematics_dir / (trc_file.stem + '_scaled.osim')).resolve()}")
 
         logging.info("\nInverse Kinematics...")
+        import time
+        start_time = time.time()
         perform_IK(trc_file, kinematics_dir, osim_setup_dir, pose_model, remove_IK_setup=remove_IK_setup)
+        end_time = time.time()
+        print(f"\tIK took {round(end_time - start_time, 2)} seconds for {trc_file.name}.")
         logging.info(f"\tDone. OpenSim logs saved to {opensim_logs_file.resolve()}.")
         logging.info(f"\tJoint angle data saved to {(kinematics_dir / (trc_file.stem + '.mot')).resolve()}\n")
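
The three setup-lookup functions above (get_markers_path, get_scaling_setup, get_IK_Setup) now share one idea, which is what makes the kinematic model "more flexible": normalize the pose model name, build the expected file name, and match it case-insensitively against the XML files actually present in osim_setup_dir, so a new model only needs a correspondingly named setup file instead of a new elif branch. A condensed sketch of that shared pattern, factored into a single hypothetical helper that is not part of the commit and omits the special cases for Halpe 68/136, COCO 133 wrist and the LSTM IK file:

from pathlib import Path

def find_setup_file(osim_setup_dir, prefix, pose_model):
    '''Case-insensitively resolve "<prefix><normalized pose_model>.xml" in osim_setup_dir.'''
    # 'HALPE_26' -> 'halpe26', 'COCO_17' -> 'coco17', etc.
    normalized = ''.join(pose_model.split('_')).lower()
    target = f'{prefix}{normalized}.xml'.lower()
    matches = [f for f in Path(osim_setup_dir).glob(f'{prefix}*.xml')
               if f.name.lower() == target]
    if not matches:
        raise ValueError(f"Pose model '{pose_model}' not supported yet.")
    return matches[0]

# e.g. find_setup_file(osim_setup_dir, 'Markers_', 'HALPE_26')
# returns the existing Markers_Halpe26.xml regardless of its exact casing.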

Pose2Sim/poseEstimation.py

Lines changed: 2 additions & 0 deletions
@@ -258,6 +258,8 @@ def process_video(video_path, pose_tracker, pose_model, output_format, save_vide
         if tracking_mode == 'sports2d':
             if 'prev_keypoints' not in locals(): prev_keypoints = keypoints
             prev_keypoints, keypoints, scores = sort_people_sports2d(prev_keypoints, keypoints, scores=scores)
+        else:
+            pass
 
         # Save to json
         if 'openpose' in output_format:

README.md

Lines changed: 40 additions & 1 deletion
@@ -72,6 +72,7 @@ Pose2Sim stands for "OpenPose to OpenSim", as it originally used *OpenPose* inpu
    3. [Demonstration Part-2: Visualize your results with OpenSim or Blender](#demonstration-part-2-visualize-your-results-with-opensim-or-blender)
    4. [Demonstration Part-3: Try multi-person analysis](#demonstration-part-3-try-multi-person-analysis)
    5. [Demonstration Part-4: Try batch processing](#demonstration-part-4-try-batch-processing)
+   6. [Too slow for you?](#too-slow-for-you)
 2. [Use on your own data](#use-on-your-own-data)
    1. [Setting up your project](#setting-up-your-project)
    2. [2D pose estimation](#2d-pose-estimation)

@@ -178,7 +179,7 @@ Type `ipython`, and try the following code:
 from Pose2Sim import Pose2Sim
 Pose2Sim.calibration()
 Pose2Sim.poseEstimation()
-Pose2Sim.synchronization() # When the GUI is prompted, make sure only RWrist point is selected
+Pose2Sim.synchronization()
 Pose2Sim.personAssociation()
 Pose2Sim.triangulation()
 Pose2Sim.filtering()

@@ -315,6 +316,44 @@ Run Pose2Sim from the `BatchSession` folder if you want to batch process the who
 For example, try uncommenting `[project]` and set `frame_range = [10,99]`, or uncomment `[pose]` and set `mode = 'lightweight'` in the `Config.toml` file of `Trial_2`.
 
 
+<br/>
+
+## Too slow for you?
+
+**Quick fixes:**
+- `Pose2Sim.calibration()`:\
+  Run it only when your cameras are moved or changed. If they are not, just copy a previous calibration.toml file into your new calibration folder.
+- `Pose2Sim.poseEstimation()`:
+  - Set `det_frequency = 100` in Config.toml. Do not run the person detector every frame: run it once, then track the bounding boxes, which is much faster (pose estimation is still run every frame).\
+    *150 s -> 30 s on my laptop with the Demo videos*
+  - Use `mode = 'lightweight'`: uses a lighter version of RTMPose, which is faster but less accurate.\
+    *30 s -> 20 s*
+  - Set `display_detection = false`. Do not display results in real time.\
+    *20 s -> 15 s*
+  - Set `save_video = 'none'`. Do not save images and videos.\
+    *15 s -> 9 s*
+  - Set `tracking_mode = 'sports2d'` or `tracking_mode = 'none'`. If several persons are in the scene, use the sports2d tracker or no tracker at all, but not 'deepsort' (sports2d tracking is almost instantaneous though).
+- `Pose2Sim.synchronization()`:\
+  Do not run if your cameras are natively synchronized.
+- `Pose2Sim.personAssociation()`:\
+  Do not run if there is only one person in the scene.
+- `Pose2Sim.triangulation()`:\
+  Not much to do here.
+- `Pose2Sim.filtering()`:\
+  You can skip this step, but it is quite fast already.
+- `Pose2Sim.markerAugmentation()`:\
+  Very fast, too. Note that marker augmentation won't necessarily improve results, so you can consider skipping it.
+- `Pose2Sim.kinematics()`:\
+  Set `use_simple_model = true`. Uses a simpler OpenSim model, without muscles and constraints. The spine will be stiff and the shoulders will be simple ball joints, but this is accurate enough for most gait-related tasks.\
+  *9 s -> 0.7 s*
+
+<br>
+
+**Use your GPU**:\
+This makes pose estimation significantly faster, without any impact on accuracy. See the [Installation](#installation) section for more information.
+
+
 </br></br>
 
 # Use on your own data
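
Putting the quick fixes from the new README section together: for a single person filmed with natively synchronized cameras, and with the speed-related settings listed above assumed to be already set in Config.toml, a minimal run can skip the optional steps entirely. A sketch, using only functions shown elsewhere in the README:

from Pose2Sim import Pose2Sim

# Assumes Config.toml already contains the speed-oriented settings from the list above
# (det_frequency = 100, mode = 'lightweight', display_detection = false,
#  save_video = 'none', tracking_mode = 'sports2d' or 'none', use_simple_model = true).
Pose2Sim.calibration()            # or reuse a previous calibration .toml if the cameras did not move
Pose2Sim.poseEstimation()
# Pose2Sim.synchronization()      # skipped: cameras natively synchronized
# Pose2Sim.personAssociation()    # skipped: only one person in the scene
Pose2Sim.triangulation()
Pose2Sim.filtering()              # fast; could also be skipped
# Pose2Sim.markerAugmentation()   # optional; does not always improve results
Pose2Sim.kinematics()             # scaling + inverse kinematics with the simple model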
