Hi,
This evening, a few of us had issues with the green/red lights (reflections on the track), spotlights (in my case), a black bar over the track, and black parts under the roof.
In almost every case, testing could have avoided the issue... but not by testing only in the simulator.
A few months ago, Aaron posted a link to real raw frames of a race in JPG.
Test all your code with these frames (or frames recorded today) and you'll be ready for the next session. You'll see that running your code in the simulator (with its white roof!) is very different from running it in real conditions.
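As a rough sketch of what such an offline test could look like (assuming OpenCV and frames saved as numbered JPG files; process_frame here is just a placeholder for your own processing, not the real race code API):

    # Run your image processing over a folder of recorded JPG frames
    # instead of the simulator or the live camera.
    import glob
    import cv2

    def process_frame(image):
        # Placeholder: call into your own processing here and return
        # whatever you want to check (detected lights, steering value, ...)
        return image.mean()

    for path in sorted(glob.glob('raw_frames/*.jpg')):
        frame = cv2.imread(path)
        if frame is None:
            print('Could not read %s' % path)
            continue
        print('%s -> %s' % (path, process_frame(frame)))

Checking the result for every single frame (rather than just glancing at the output) is the important part.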
In my case, I didn't finish my testing code before 09:00 on Monday, so I ran the test over all frames with only a quick overall check and missed 3 little frames... exactly at the place where my borg went wrong (scared by a spotlight). Now (too late), my testing code is complete and I know why.
I think the frames provided by Aaron could be helpful for anyone who didn't have the pleasure of seeing their borg race around the track today!
Aaron's post: https://www.formulapi.com/blog/monster-raw-footage-analysis
Regards
OK, so I have been going over some test images of the lights because this is the thing that scares me most: after all the hard work, just sitting on the start line... especially as I have trouble with the simulation sometimes not detecting the lights! (I am running it on Linux and I know it's unsupported.)
But my question is about the raw footage and SimulationImage.py.
When I run it to detect the lights using the raw footage, the image shown (lightsFrame) is the track. The raw image is upside down (which I believe is correct, as the camera is mounted upside down), but the processing seems to expect the image the right way up; if I flip the image, it appears correct and shows the lights.
I am using the raw images of the lights between 800 and 1200.
So should I flip the raw images for my testing, or is the image processing looking for the glare of the lights on the track?
On race day there does not seem to be a lights trial, so this is something that needs to be bulletproof, in my opinion.
The images fed in to SimulationImages.py should be upside-down, like they are with the camera.
I have had a look at the old raw images, and it seems they are the wrong way up (already rotated). This is a side-effect of how they have been recorded.
The newer raw images were taken without the code performing a rotate, so they match the actual camera images.
Basically SimulationImages.py will rotate the images itself to mirror how the StreamProcessor.run function works in ImageProcessor.py. You should either rotate the images first so that they are upside-down to start with, or set flippedImage to False when using the pre-rotated images only.
Hopefully that all makes sense :)
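If you would rather pre-rotate the old frames yourself, something along these lines would do it (just a sketch using OpenCV; the file names are only examples):

    # Rotate a recorded frame by 180 degrees so it is upside-down,
    # matching what the camera (and so SimulationImages.py) expects.
    import cv2

    frame = cv2.imread('lights_frame_0800.jpg')
    flipped = cv2.flip(frame, -1)   # -1 flips both axes, i.e. a 180 degree rotation
    cv2.imwrite('lights_frame_0800_flipped.jpg', flipped)

Leaving the pre-rotated images untouched and setting flippedImage to False instead should give the same end result.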
P.S.
During your testing time we can run the lights if you want, it just takes us a little longer than if you use the NoLights_Front.py script. We can also reset the robots or power cycle them if you wanted.
Make sure you have a Twitch account ready so that you can talk to us on the Twitch chat if you want us to do anything or need any help.
OK, that makes sense, thanks.
Hi all,
I used the simulator in the previous season from Linux, and it worked fine. I noticed a couple of things though, which you can check to see if that's why it sometimes doesn't detect the lights:
- it takes a few seconds before the start lights work. Add some logging in your script just after waiting for the lights, and then wait around 5 seconds to be safe (see the sketch after this list)
- if the simulator window is small, the lights were not properly detected. I noticed that on the laptop screen things would not work, but hiding the top and bottom bars of the Linux desktop gave it just enough extra size to work.
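For the first point, this is roughly what I mean (wait_for_lights() is just a stand-in for whatever your own script uses to wait for the start lights, not the real function name):

    import time

    def wait_for_lights():
        # Stand-in for however your own script waits for the start lights.
        pass

    wait_for_lights()
    print('Lights wait finished')   # log it, so you can see whether detection really ran
    time.sleep(5)                   # give the simulator a few extra seconds to be safe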
Regards,
Jorge
Yes, I agree. The issue stems (I think) from the fact that on my laptop screen the simulation is squashed, and when I move it to the second, larger screen I have to resize it into the correct position so it picks up the lights at the correct x and y...
I was not aware of that problem, possibly because all of the laptops / monitors around our office are 1920x1080 or very old!
I can only presume there is a bug with the OpenGL code in the simulator which causes the output for the camera not to render correctly if it is not fully visible on screen. I will make a note of it, but we are unlikely to have time to fix a problem of that sort in the next few weeks...
Hi, I wonder why the simulator source code is not published. Is there a reason? Regards.
The simulator code was written by computernerd486 and is freely available on his GitHub here. If you have some experience with OpenGL then the code should be reasonably clear. We may have updated the settings since that version, but I am fairly certain the code itself is the latest copy.
We are really grateful to him for all the hard work on creating the simulator. We simply helped out with some data and the connecting Python script to use the simulation instead of the camera input. He deserves all the credit :)
See the Race Simulation post for an explanation of how the simulation has progressed during its early development :)
On a side note, our spin view is a modified version of the same base code which does not show the track view and moves the camera around :D