MIT-PITT-RW Logs
Playing bags
Monza
ros2 bag play -s mcap /media/steven/ExtremePro1/rosbags/sensors_2023-06-06_13-53-57_-0400 -l
Indy track
ros2 bag play /media/steven/ExtremePro1/rosbags/sensors_2023-01-04_11-06-29_-0500 -l
camera
ros2 bag play -s mcap --start-offset 140 /media/steven/ExtremePro1/rosbags/cameras_2023-06-06_13-53-57_-0400 -l
Calibration
ros2 bag play -s mcap /media/steven/ExtremePro1/rosbags/calibration_2023-04-14_07-56-17_-0400 -l
SH-T bags that I downloaded:
~/rosbags/sensors_2023-01-03_14-18-52_-0500
~/rosbags/sensors_2023-01-06_17-09-10_-0500
For my mac configuration, I need to modify install/setup.bash
so it sources the right path.
Run the node
ros2 run SECOND_package wall_detection_dds
Use the launch config
ros2 launch SECOND_package wall_detect.launch.py
2023-12-05
Time for the comparison:
- Speed, accuracy, reliability
Benchmark in real time.
2023-12-04
We’re less than a month away from the Indy competition. Need to make sure I get the NITROS pipeline going.
I’m setting it up on my Jetson. Realizing that there isn’t enough storage to use the isaac_ros_common Docker image.
I tried mounting it externally, but it wasn’t working:
/home/steven/joint-bvs-ws/src/drivers/vimba_camera/avt_vimba_camera/include/avt_vimba_camera/avt_vimba_camera.hpp:41:10: fatal error: diagnostic_updater/diagnostic_updater.hpp: No such file or directory
41 | #include <diagnostic_updater/diagnostic_updater.hpp>
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Though the fix was just
sudo apt-get install ros-$ROS_DISTRO-diagnostic-updater
I did a rosdep install specifying the OS, which seemed to fix it:
rosdep install --from-paths src -y --ignore-src --os ubuntu:jammy
sudo apt install ros-humble-image-common
And then rerun the rosdep install, and that fixes it.
So ament_auto_find_build_dependencies() does an auto find?? (It seems to call find_package() for every dependency listed in package.xml.)
There are multiple message types:
- bvs_msgs (our internal message types)
- novatel_oem7_msgs (for the NovAtel GNSS/INS)
- vectornav_msgs (for the VectorNav IMU)
- delphi_mrr_msgs (for Radar)
Another thing is that my PR is failing due to the CMake version being too low for the Dockerfile. I need to figure out what the actual base image is.
Now, which branch is latest? Asking Andrew.
- I found that osrf/ros:humble-desktop and up uses CMake 3.22.1, whereas our latest image still uses ros:galactic
Alright, I am getting the high-level view of how our code is run. We look at our launch files.
This is the main launch file that launches everything:
src/common/bvs_common/launch/main.launch.py
It seems, however, that there is another layer on top of that. They don’t just use ros2 launch main.launch.py, but rather integrate it into different bash scripts.
Note that we don’t use ComposableNodes due to some packages being in Python.
I am going to work on making the logging done through NITROS.
The 2 main scripts from what I see:
- main_launch.sh -> calls main.launch.py
- launch_sensors_bagging.sh -> calls a bunch of launch files
publish_compressed is by default false.
I need to upgrade the cmake version:
steven@steven-desktop:~/joint-bvs-ws$ pip install cmake
Requirement already satisfied: cmake in /home/steven/.local/lib/python3.8/site-packages (3.27.9)
steven@steven-desktop:~/joint-bvs-ws$ which cmake
/usr/bin/cmake
I am following this:
For some reason, I can’t get things to build correctly outside of the Docker image, even though I have ROS installed.
I find myself needing to add these lines, because it can’t find cuda_runtime.h:
find_package(CUDA REQUIRED)
include_directories("${CUDA_INCLUDE_DIRS}")
2023-11-27
One of the things that I need to fundamentally understand is how ROS nodes get executed in C++. Specifically, there have been quite a few syntaxes that I’ve been exposed to:
int main(int argc, char** argv)
{
  rclcpp::init(argc, argv);
  auto node = std::make_shared<avt_vimba_camera::MonoCameraNode>();
  node->start();
  rclcpp::spin(node);
  return 0;
}
Because at NVIDIA, I saw that our launch files that use Composable Nodes didn’t specify an executable; see ROS Launch Files.
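For reference, this is roughly what a composable-node launch looks like in Python. This is a generic sketch with example names (the image_proc::RectifyNode plugin is just an illustration, not our pipeline): the container is the only executable, and each node is loaded as a plugin inside it.

from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode

def generate_launch_description():
    container = ComposableNodeContainer(
        name='camera_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container',
        composable_node_descriptions=[
            # Each entry is a registered rclcpp component (a plugin), not an executable
            ComposableNode(
                package='image_proc',
                plugin='image_proc::RectifyNode',
                name='rectify_node',
            ),
        ],
        output='screen',
    )
    return LaunchDescription([container])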
Andrew Saba was also telling me about how we don’t use tf2, and rather subscribe directly to /tf, due to it being much faster for real-time applications.
Alright, so in the codebase, they have a folder specifically for all the executables, unlike before when I had everything in the same folder.
So the goal is to just figure out how can I use managed NITROS (CUDA with NITROS), so I can make my own publishers and subscribers for NITROS images.
Right now, our codebase spawns a new process for every node (look at our cameras.launch.py for example).
Our middleware helps eliminate the need to serialize and deserialize by using shared memory rather than UDP, but that still involves copy operations.
I’ve been following the managed NITROS image builder example, and just adding in the publisher. It’s been quite straightforward.
I still don’t quite get why we need a custom publisher instead of the standard ROS one, i.e. rclcpp::Publisher.
I get to have a sneak peek at how they do the compression, which is through this logic in avt_vimba_api.hpp:
if (publish_compressed) {
  // const auto debayer_start = std::chrono::high_resolution_clock::now();
  compressed_image.header = image.header;
  compressed_image.format = "jpeg";
  cv::imencode(".jpg", output_mat, compressed_image.data,
               std::vector<int>{
                   cv::IMWRITE_JPEG_QUALITY, 90
                   // TODO: Add more parameters here
               });
}
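Since I mostly write Python, here is roughly the same logic in Python for my own reference. This is a sketch, not our code; image and output_mat are assumed to already exist, with output_mat being the debayered OpenCV image.

import cv2
from sensor_msgs.msg import CompressedImage

compressed_image = CompressedImage()
compressed_image.header = image.header  # reuse the raw image's header/stamp
compressed_image.format = "jpeg"
ok, buf = cv2.imencode(".jpg", output_mat, [int(cv2.IMWRITE_JPEG_QUALITY), 90])
if ok:
    compressed_image.data = buf.tobytes()  # uint8 ndarray -> bytes field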
2023-11-25
Been away from this team for way too long, I feel bad. Need to get back and prioritize so I can make a video on this. Had a really productive discussion with Andrew Saba; 3 main things to work on, listed in terms of priority:
- Calibration
- Camera intrinsics, camera to camera/IMU/LiDAR
- LiDAR to LiDAR / camera / IMU
- SLAM
- Open to VIO (i.e. see here) or LiDAR (i.e. LIO-SAM or FAST-LIO 2)
- Recommend testing the VIO and then testing FAST-LIO 2
- Use front LiDAR & novatel_bottom/rawimux for IMU
- Speeding up / making camera pipelines more efficient
- Driver -> Object detection -> Data recording
Camera Driver: https://github.com/PITT-MIT-IAC/avt_vimba_camera
2023-09-30
I spoke with Micah.
Motivation: Monza.
Initial approach was SuperOdometry.
LIO-SAM FastSLAM
2023-09-26
Realizing that there is one more thing that I need to do before properly closing out this task, which is to get the left wall to be segmented properly.
2023-09-23
Alright, I am going to clean up the code and make the PR.
2023-09-18
I played around; seems like an intensity threshold of 0.5 is good.
- Using a 0.4 threshold has some false positives
Also need a parameter for a minimum number of points.
Also, maybe add JIT compiler flags? (Rough sketch below.)
- Lots of redundant code, need to clean up
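Rough sketch of the jitted filter idea. The function name and flags are my own, not the codebase's; it assumes pts is an (N, 4) float32 array with intensity in the last column, and that numba is even the right tool here.

import numpy as np
from numba import njit

@njit(cache=True, fastmath=True)
def filter_by_intensity(pts, intensity_threshold=0.5, min_points=50):
    # Keep points whose intensity clears the threshold
    mask = pts[:, 3] >= intensity_threshold
    filtered = pts[mask]
    # Bail out if there aren't enough points left to fit a wall
    if filtered.shape[0] < min_points:
        return filtered[:0]
    return filtered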
Then, the next task will probably be localization?
2023-09-12
Alright, I am finally taking the time to sit down and work on this. I cannot keep getting myself distracted. The Saturday is like a once-in-a-while thingy. I am not the shit yet. Remember that you want to be invited for Open Sauce next year too.
For the mcap, I have to install
mamba install ros-humble-rosbag2-storage-mcap -y
So I added this piece of code so that I can republish the cropped points:

import numpy as np
from sensor_msgs.msg import PointCloud2, PointField

# Repack the cropped (N, 4) float32 array (x, y, z, intensity) into a PointCloud2
tout = PointCloud2()
tout.header.frame_id = 'luminar_front'
tout.height = 1
tout.width = pts.shape[0]
ros_dtype = PointField.FLOAT32
dtype = np.float32
itemsize = np.dtype(dtype).itemsize
fields = [PointField(name=n, offset=i * itemsize, datatype=ros_dtype, count=1)
          for i, n in enumerate('xyzi')]
tout.fields = fields
tout.point_step = 4 * itemsize  # 4 float32 fields = 16 bytes per point
tout.row_step = tout.point_step * tout.width
tout.is_dense = True
tout.data = pts.tobytes()  # assumes pts is already float32 with shape (N, 4)
self.filtered_pointcloud_publisher.publish(tout)
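To sanity-check the message, I think you can read it straight back with the Humble point_cloud2 helper (assuming the numpy-based version that returns a structured array):

from sensor_msgs_py import point_cloud2

pts_read = point_cloud2.read_points(tout, field_names=('x', 'y', 'z', 'i'), skip_nans=True)
print(len(pts_read), "points in the republished cloud")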
So I’m realizing that this might be a pretty big problem, because of how big the track is. The car might not even be able to see the right wall.
Same but I increased the point size
Zoomed out:
So it kind of has this blind spot on the right.
In this area, we cannot see the left region
2023-08-31
Alright, it took the entirety of yesterday to download this 40 GB rosbag. In the other rosbag (2023-01-03), the car was stationary over half the time.
This was the command for the older bag
ros2 bag play ~/rosbags/sensors_2023-01-03_14-18-52_-0500
I also think I finally set up ConnextDDS; details are in ConnextDDS.
It seems that they do subscriptions differently in ConnextDDS. They pipe it through rt/TOPIC_NAME?
import rti.connextdds as dds  # (assuming the rti.connextdds Python API)

def subscriber_main(self):
    self.participant_qos = dds.QosProvider.default.participant_qos
    self.participant_qos.user_data.value = bytearray("password".encode())
    self.participant = dds.DomainParticipant(self.DOMAIN_ID, self.participant_qos)
    self.participant.enable()
    self.msg_type = dds.QosProvider(DDS_MSG_FILE).type(
        "sensor_msgs::msg::dds_::PointCloud2_"
    )
    self.topic = dds.DynamicData.Topic(
        self.participant, "rt" + self.POINTCLOUD_TOPIC, self.msg_type
    )
    self.reader = dds.DynamicData.DataReader(
        dds.Subscriber(self.participant), self.topic
    )
    self.reader.bind_listener(MsgListener(), dds.StatusMask.DATA_AVAILABLE)
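MsgListener isn’t shown here; my guess at its rough shape with the Connext Python API (the base class name and sample iteration might differ between Connext versions, so treat this as a sketch):

import rti.connextdds as dds

class MsgListener(dds.DynamicData.NoOpDataReaderListener):
    def on_data_available(self, reader):
        # Take all available samples and only look at the valid ones
        with reader.take() as samples:
            for sample in samples:
                if sample.info.valid:
                    print("PointCloud2 sample, width =", sample.data["width"])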
So adding the rt prefix actually comes from here: https://design.ros2.org/articles/topic_and_service_names.html
“In order to differentiate ROS topics easily, all DDS topics created by ROS shall be automatically prefixed with a namespace like /rX, where X is a single character that indicates to which subsystem of ROS the topic belongs. For example, a plain topic called /foo would translate to a DDS topic rt/foo, which is the result of implicitly adding rt to the namespace of a ROS topic which is in the root namespace / and has a base name foo.”
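So just to spell the rule out for myself, the mapping for a plain topic in the root namespace is trivial (my own toy helper; it ignores namespaces, services, and the other /rX prefixes):

def ros_topic_to_dds_topic(ros_topic: str) -> str:
    # "/lidar/cropped" -> "rt/lidar/cropped"
    return "rt" + ros_topic

assert ros_topic_to_dds_topic("/foo") == "rt/foo"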
Deciding to do a deep dive on ROS again today, as there are lots of foundational knowledge that I need, like how namespaces, DDS, and all those things are designed. They are not in my brain yet.
So this is a snapshot of the Indy track. The car goes counterclockwise, and there is a very apparent wall on the right, but no wall on the left. Only thing is that the lidar points get closer and closer to each other.
2023-08-30
Going to get this working on the indy track, though it is a little hard to see the track boundaries
2023-08-23
Alright, I am on the flight, so I need to get back onto perception. There are 2 things I can do, either:
- Continue reading up on SLAM
- Practice implementing some algorithms in ROS2
Being on the plane forces me to do stuff, because I am offline anyways.
Okay, so I need to break this task down, else I just get overwhelmed:
What is the problem?
We currently only use GNSS for localization. This works most of the time, but can fail in cases such as going into tunnels. With the car going at such high speeds, it is important that we build redundancy into our system.
We can also afford redundancy because we have good compute.
So something was implemented for the Monza track because the car was going inside a small tunnel. The first step is to make it work for the Indy track as well.
The Monza implementation:
- Tries to find the white lines, and using RANSAC, tries to fit the line
Steps
I am going to make several PRs.
- Adapt this so that it works on the Indy Track
- Localization
Umm, I am doing this on the airplane, and it seems that I really need WiFi to make it work? I guess I will try to make it work without using the rosbag.
What does output=screen do in the launch description?
from launch import LaunchDescription
from ament_index_python.packages import get_package_share_directory
import launch_ros.actions
import os
import yaml
from launch.substitutions import EnvironmentVariable
import pathlib
import launch.actions
from launch.actions import DeclareLaunchArgument

def generate_launch_description():
    return LaunchDescription([
        launch_ros.actions.Node(
            package='SECOND_package',
            executable='wall_detection_dds',
            name='wall_detection_dds',
            output='screen',
            parameters=[os.path.join(get_package_share_directory("bvs_common"), 'config', 'ekf-full-state.yaml')],
        ),
    ])
2023-08-17
Alright, I have the ECE192 exam tomorrow, but this is way more important. I need to finish my first task.
So it seems that the /lidar_all topic isn’t publishing at all for the Monza rosbag? I need to use /lidar/cropped.
Bring this up at the next meeting.
Some things that I am observing about the codebase:
- They are using an odom_buffer
What does np.eye do? (It creates an identity matrix.)
When the car is steering, since we are just fitting lines, it doesn’t work well at all.
They use the ltp frame. It’s really hard to see.
We need a map frame.
Alright, I am just figuring out what we have. I saw here https://github.com/PITT-MIT-IAC/joint-bvs-ws/blob/main/docs/vehicle/basestation.md that they use it to see where the car is relative to the raceline. So I assume they put the raceline on some /map frame, and then localize the car, and drive accordingly.
- This is exactly what I’ve done with the F1TENTH
However, now, we want to build redundancy into the system. And potentially even do SLAM, to help the car localize.
- Before we do full on SLAM, we want to just be able to represent the track boundaries.
Observations from running this on the rviz:
- When the car steers, the lines aren’t perfectly straight.
So the wall detection toggles
This is pretty good
Cases where the lines don’t line up:
Beginning of the track for some reason
When the car is turning, lines suddenly cross each other
On sharp turns, a straight line to represent the boundary doesn’t work super well…
So now, I need to understand how exactly this algorithm works.
Some cool things:
They use a lock to process the lidar scan, since there is a pointcloud_callback that updates self.pts on every call.
from threading import Lock

self._lock = Lock()

def run_detection(self):
    ...
    with self._lock:
        if self.pts is not None:
            pts = self.pts.copy()
            # intensity = self.intensity.copy()
        else:
            pts = None
    if pts is None:
        return
The algorithm simply tries a line of best fit over each side.
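To make sure I actually get it, a toy version of the fit (my own numpy sketch, not the team’s code; assumes the points for one wall are already selected and roughly parallel to the direction of travel):

import numpy as np

def fit_wall_line(wall_pts):
    # wall_pts: (N, 2) array of x, y points belonging to one wall.
    # Least-squares fit of y = m*x + b in the vehicle frame.
    m, b = np.polyfit(wall_pts[:, 0], wall_pts[:, 1], deg=1)
    return m, b

# Toy usage: a wall ~6 m to the left with a bit of noise
rng = np.random.default_rng(0)
xs = np.linspace(0.0, 40.0, 200)
left_wall = np.column_stack([xs, 6.0 + 0.05 * rng.standard_normal(xs.size)])
m_l, b_l = fit_wall_line(left_wall)
print(m_l, b_l)  # slope ~0 (heading aligned with the wall), offset ~6 m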
Why do we have this algorithm when we already have a GPS?
- Adds redundancy into the system. In case the GPS fails. But what if this algorithm fails?
- We saw other teams use these localization methods instead of the GNSS. Might be faster / more reliable
2023-08-16
Okay, I need to get back on the grind. Need to figure out how to install ROS on my computer.
omgggg I found this: https://robostack.github.io/GettingStarted.html
THIS IS A LIFESAVER AND IT WORKS I CAN GET RVIZ ON MY M1 MAC THAT IS INSANE!!
ros2 bag play -s mcap ~/rosbags/sensors_2023-06-06_13-53-57_-0400
Yesssss, I am able to visualize.
Now, I want to run the node that the team wrote. Need to install vision_msgs:
mamba install ros-humble-vision-msgs
Running into errors, cannot build perception_msgs
…
colcon build --packages-select perception_msgs --cmake-args -DPython_EXECUTABLE=/Users/stevengong/mambaforge/envs/ros_env/bin/python
Turns out, it was because I had python3.11 installed with brew. Just uninstall that alongside numpy https://github.com/RoboStack/ros-humble/issues/39
Interesting, ConnextDDS for Indy Autonomous Challenge: https://community.rti.com/page/indy-autonomous-challenge
2023-08-08
I downloaded the files locally, on ~/rosbags
2023-08-07
There are all these errors, but to properly go through them, I need to first properly understand how these different architectures actually work.
I don’t just want to be copy pasting Brian Chen’s commands. These are fundamental things that you must understand.
No more “I don’t understand why it works, but it works”. That is noobish developer behavior.
Concepts learned: LLVM, GDB, x86, arm64, 32-bit vs. 64-bit architecture.
Now, I still don’t fully understand how CMake builds all of this.
However, I also did spend quite a bit of time understanding how VSCode and the debugger work, and configuring a Clang formatter.
I must absolutely watch the compiler video, but I gotta study now.
2023-08-06
Gonna try installing locally on my MacBook; the Docker containers are way too painful.
Basically following the humble guide: https://docs.ros.org/en/humble/Installation/Alternatives/macOS-Development-Setup.html
But for pygraphviz, you need to install it linked against Homebrew’s graphviz, like this:
pip install --global-option=build_ext --global-option="-I$(brew --prefix graphviz)/include/" --global-option="-L$(brew --prefix graphviz)/lib/" pygraphviz
2023-08-05
So I still haven’t figured it out yesterday, just shows how I am a Terrible Programmer. I need more foundational knowledge.
I need to get Foxglove working, so I can visualize the data. There are no other ways; rviz is WAY too slow.
I did get Foxglove to work with the NuScenes bags, but that was running ROS Humble, so maybe that makes a difference.
2023-08-04
Okay, let me get this running. Though it is really good that I am getting the fundamentals down.
Okay, so it might actually be because of the formatting of the rosbags. What an oversight by me.
So it seems that there are two different formats a rosbag can have, either sqlite or mcap.
- Basically, I was trying to run the NuScenes bag, because I know that works.
I’m not sure if we need to install mcap? https://github.com/ros2/rosbag2/tree/rolling/rosbag2_storage_mcap or if it comes out of the box
sudo apt update -y && sudo apt upgrade -y
sudo apt install ros-$ROS_DISTRO-foxglove-bridge -y
ros2 bag play -s mcap /nuscenes/NuScenes-v1.0-mini-scene-0553/NuScenes-v1.0-mini-scene-0553_0.mcap --topics /CAM_FRONT/image_rect_compressed
So it runs, but after a few seconds, I get:
[foxglove_bridge-1] 1691183703.106944 [] foxglove_b: create_thread: foxglove_bridge: no free slot
So it crashes after running a bit. It seems to be an issue specific to cycloneDDS? https://github.com/foxglove/ros-foxglove-bridge/issues/254
- Why did I not get this error when I was doing it from inside the other containers? Maybe because of their launch configuration? or because they were in separate containers?
I hate figuring out issues by comparing what works and what doesn’t.
I used num_threads = 64 and that fixed the issue?? https://github.com/foxglove/ros-foxglove-bridge/blob/main/ros2_foxglove_bridge/launch/foxglove_bridge_launch.xml
I finally am asking ChatGPT https://chat.openai.com/share/a209af92-2e42-4123-b5b8-dc351ca7810d
- “If you’re using the default Docker network mode (“bridge”), it can cause issues because it does not support UDP multicast in the best manner.”
I remember trying “host” mode yesterday and that helped a lot.
So when you start up foxglove, just make sure to use a different port.
So to summarize, the issue came down to CycloneDDS having issues communicating within the Docker container. This is why when I switched to FastRTPS, the problems were gone. I saw somewhere that the team was using CycloneDDS, so I thought it’s best if I try sticking to that.
- This also causes CycloneDDS to crash
But then, I switched to FastRTPS, but the foxglove stuff still wasn’t displaying. Maybe it’s because of the Rosbag type? Let’s try the other rosbag that is in mcap.
TODO
Really understand why “If you’re using the default Docker network mode (“bridge”), it can cause issues because it does not support UDP multicast in the best manner.”
Commands
sudo apt update -y && sudo apt upgrade -y
sudo apt install ros-$ROS_DISTRO-foxglove-bridge -y
Play the rosbag
sudo apt install ros-$ROS_DISTRO-rosbag2-storage-mcap -y
Indy track
ros2 bag play /rosbags/sensors_2023-01-03_12-46-13_-0500/ --topics /lidar_all
Monza
ros2 bag play -s mcap /rosbags/sensors_2023-06-06_13-53-57_-0400 --topics /lidar/cropped
ros2 bag play /rosbags/sensors_2023-06-06_13-53-57_-0400 --topics /lidar_all
Start up foxglove (port is occupied so change it)
ros2 run foxglove_bridge foxglove_bridge --ros-args -p port:=8766
2023-08-03
I’m thinking the reason I can’t topic echo the rosbag is because I am playing the rosbag from a mounted directory, which is on another filesystem (thor).
I figured this out by running df /mnt/wato-drive, and then it said thor. However, I was running the container from inside trpro, so that probably plays a big factor.
Yes!! that is the reason.
Can now run rosbags and listen
So if I play the bags on another mounted hard drive, and I do ros2 topic hz /lidar_all, it just doesn’t work. But if they are on the same drive, it works…? I don’t know why. Actually, it wasn’t because they were on different drives. I misdiagnosed the issue. This is because you operate at too high a level of abstraction.
Usually, I would stop here, but now, that is not good enough. It is not good enough to just make things work. It is important to understand why things work and don’t.
- #todo So why?
you need to keep trying.
The thing is, having everything in Docker makes it so much more complicated. I really hate that :((
I’m really going to get my fundamentals down, to understand how networking works. I first got an introduction to this at Ericsson, but I feel that I still don’t know much.
So I spent like 5 hours learning more about computer networking, went through some of the textbook for ECE358.
I think the issue lies in the buffer size, since I am getting the complaint
1691117453.035757 [2] ros2: failed to increase socket receive buffer size to 1048576 bytes, continuing with 425984 bytes
That is 0.425MB, but one lidar scan alone is 0.88MB, so the messages cannot be transmitted successfully.
I am having issues with CycloneDDS, but FastRTPS is the default (or is it FastDDS?), and I think that is the one that we are using. So I added
echo "export RMW_IMPLEMENTATION=rmw_fastrtps_cpp" >> ~/.bashrc
I think this is because we are working with ROS Galactic, which uses CycloneDDS as the default. But Fast DDS (FastRTPS) is the default again from Humble onwards.
- I am trying to sanity check this by checking the $RMW_IMPLEMENTATION variable in a fresh docker container, but that variable is not set??
I wanted to increase the socket buffer size, but I would need sudo permission for that. Fix is to use FastRTPS for now, https://docs.ros.org/en/rolling/How-To-Guides/DDS-tuning.html
- this is a bad fix though, can you not just reduce the buffer size?
UMMM this is still super annoying, I don’t understand what they are doing with the Dockerfile. I tried a raw Galactic container, like this:
docker run -it ros:galactic-ros-base
But that doesn’t have the $RMW_IMPLEMENTATION environment variable configured, whereas our custom container does. However, we didn’t seem to configure that anywhere, so I am super confused. And it seems that we have CycloneDDS under /etc/cyclone
- That is from the https://github.com/PITT-MIT-IAC/bvs_docker_base/blob/main/Dockerfile container, because they explicitly installed cyclonedds, or maybe that is what comes on osrf/ros:galactic-desktop image?
- Noo, even the apt install doesn’t install it
I DONT UNDERSTAND…
WHERE DO THESE THINGS COME FROM.
2023-08-02
Similar to getting started on WATO, it’s kind of annoying setting up everything. If I do everything locally, it’s so much easier. But oh well, use Foxglove and shit.
I think the workaround is to permanently have this container on: Add a mount
For every fresh container, just install Foxglove
sudo apt update -y
sudo apt install ros-$ROS_DISTRO-foxglove-bridge -y
And then just stream the data through Foxglove:
lsof -ti:8766 | xargs kill -9
ssh -NfL 8766:localhost:8766 s36gong@trpro-ubuntu1.watocluster.local
lsof -ti:8767 | xargs kill -9
ssh -NfL 8767:localhost:8767 s36gong@trpro-ubuntu1.watocluster.local
I downloaded the Rosbag from the google drive
import gdown
url = "https://drive.google.com/drive/folders/1URycEFbRSt1GLmvvn73Cy61y2kGYXsbu?usp=drive_link"
gdown.download_folder(url,quiet=False)
ros2 bag play /rosbags/sensors_2023-01-03_12-46-13_-0500/ --topics /lidar_all
So I am able to play the bags, but because I am working in Docker containers, there are lots of discovery issues; https://github.com/eclipse-cyclonedds/cyclonedds/issues/677
I think the issue is because I am playing data that is actually mounted, so there are communication issues?
2023-08-01
It’s an hour before the meeting, and it really seems like I haven’t done an awful lot this week. What a way to start disappointing the team, though to be fair, you were grinding out the edit. But with the extra hour, make sure to make the post out of it.
The first thing is to clone the repositories (had to add a .git ssh key):
- Get started with the LGSVL Simulation
- Do the localization thingy with https://github.com/PITT-MIT-IAC/efficient-perception-ros/blob/master/efficient-perception-ROS/src/SECOND_package/SECOND_package/wall_detect_node_lowpass_dds.py
I cloned the repo. Now, trying to run the perception stack
Questions:
- What does LGSVL stand for?
- oooh, that’s the LG simulator: https://www.svlsimulator.com/
- super cool
So right now, I am just trying to figure out how they test their code.
I’m supposed to improve upon the wall detection node.
They’re not using the standard callback setup; they’re using a ReentrantCallbackGroup.
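To remind myself what that means in rclpy: callbacks in a ReentrantCallbackGroup are allowed to run concurrently under a MultiThreadedExecutor, so shared state needs its own locking. This is a generic sketch, not their node:

import rclpy
from rclpy.node import Node
from rclpy.callback_groups import ReentrantCallbackGroup
from rclpy.executors import MultiThreadedExecutor
from sensor_msgs.msg import PointCloud2

class ReentrantDemo(Node):
    def __init__(self):
        super().__init__('reentrant_demo')
        group = ReentrantCallbackGroup()
        # Both callbacks may run at the same time on different executor threads
        self.create_subscription(PointCloud2, '/lidar/cropped', self.on_cloud, 1,
                                 callback_group=group)
        self.create_timer(0.1, self.on_timer, callback_group=group)

    def on_cloud(self, msg):
        self.get_logger().info(f'cloud with {msg.width} points')

    def on_timer(self):
        pass  # e.g. run the detection step here

def main():
    rclpy.init()
    node = ReentrantDemo()
    rclpy.spin(node, executor=MultiThreadedExecutor())
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()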
So I’m first trying to understand what the code is doing from here:
They are fitting a line on each wall, and then you can infer the heading of the car from these two walls.
Questions for tonight:
[x] In terms of development process:
1. Run `./lgsvl-config/up.sh`
2. Then, go inside the docker container and attach the VSCode shell
3. You need to colcon build first? What about the pip dependencies? Is everything set up first?
RTI Connext DDS is not available?
So how do we do localization inside the walls?
- Basically, get the position of the car with respect to the walls? And combine it with a Kalman filter, but that should be the job of another node
Right now, the algorithm tries to draw a line of best fit for the left and right wall.
Use the vehicle data: https://drive.google.com/drive/u/0/folders/1WAEG5hr-Es_2fhwCqZv_s-DvtoWWTapS (April or May)
- Adapt the code for the Vegas track
- Find the distance to the track points; find the position of the car laterally (rough sketch below)
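My reading of the “position of the car laterally” note, as a sketch (my own formulation, not theirs): once a wall is fitted as y = m*x + b in the vehicle frame, the car’s lateral distance to that wall is just the perpendicular distance from the origin.

import numpy as np

def lateral_distance_to_wall(m, b):
    # Perpendicular distance from the car (origin of the vehicle frame)
    # to the wall line y = m*x + b, written as m*x - y + b = 0.
    return abs(b) / np.sqrt(m * m + 1.0)

print(lateral_distance_to_wall(0.02, 6.1))  # a wall about 6.1 m to the left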
Seems like Arjun and Connor like to run things locally because using docker is kind of annoying.
I notice some bottlenecks:
- They’re putting rosbags on Google Drive, which is super annoying
2023-07-25
First meeting with the MIT-PITT-RW team.
They are working on auto labelling for lidar.
Super Odometry:
- IMU + LiDAR
Some teams use lidar to estimate the track bounds, rather than purely GPS.
knows which is working.
My first task would be to get started with the localization indoors.
I find it nice and ironic how I lost the F1TENTH competition because I had really shitty localization. Now I get to work on this and redeem myself :)