Tips on speeding up reading the CoDrone camera in Python?



  • Hi all,

    I have been working on a Python script to control the drone with an Xbox 360 controller, take the video from the drone, and overlay sensor data on it as a HUD display; then, once the script ends, process the video and recognize people.

    I'm having an issue where the script lags. I suspect it's because reading the camera is taking longer than I would like and slowing down the program. At 20 fps I have 0.05 seconds per loop, and sometimes reading the camera takes rather longer than 0.05 seconds. Do you have any tips on speeding it up? I was thinking I might try some threading when reading the camera, as per: https://www.pyimagesearch.com/2017/02/06/faster-video-file-fps-with-cv2-videocapture-and-opencv/

    Here is a screenshot of some of the logging output:
    ![logging screenshot](0_1569715268839_logging screenshot.png)

    And here's a screenshot of some profiling I did with cProfile. I haven't used this before and am not great with it, but the time.sleep part looks weird to me. I'm not sure where this is getting called, but I'm suspicious of it too. Maybe the OpenCV reads are sleeping while it's encoding or decoding...
    ![cProfile output](0_1569715377067_cprofile output.png)
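
    A minimal way to reproduce that profiling, sorted by cumulative time (assuming the main loop is wrapped in a main() function, which is my guess):

        import cProfile
        import pstats

        # profile the main loop, then print the 20 most expensive calls
        cProfile.run("main()", "profile.out")
        pstats.Stats("profile.out").sort_stats("cumulative").print_stats(20)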

    Here's the part of the code that loops while the camera is open:

        while cap.isOpened():
            loop_start_time = time.time()
            now = time.time()
            r, frame = cap.read()
            # bail out if the camera stops returning frames
            if not r:
                break
            logging.info("Reading frame took {} seconds".format(time.time() - now))
            # incrementing sensor_data; only read the non-real-time
            # sensors every 20th frame. I chose which sensors to skip each loop
            # based on what I wanted real-time updates on
            sensor_data = update_sensor_information_logger(drone, sensor_data)
            if sensor_data_counter % 20 == 0:
                sensor_data = update_non_real_time_info_logger(drone, sensor_data)
                logging.info("Roll: {} Yaw: {} Pitch: {} Battery: {}".format(
                    sensor_data["gyro_angles"].ROLL, sensor_data["gyro_angles"].YAW,
                    sensor_data["gyro_angles"].PITCH, sensor_data["battery_percentage"]))
                sensor_data_counter = 0
            sensor_data_counter += 1

            # refreshing data from the joystick
            pygame.event.get()
            # sending joystick commands, only if the drone is in flight
            sensor_data = command_top_gun_logger(drone, joystick, sensor_data, exit_flag)

            # overlaying the HUD on the frame
            frame = hud_display_logger(frame, width, height, sensor_data, font)

            # reading the buttons
            sensor_data = get_joystick_buttons_logger(sensor_data, joystick)

            # button A is take off
            if sensor_data["button_A"] and not take_off:
                drone.takeoff()
                take_off = True
            # button B is land
            elif sensor_data["button_B"] and take_off:
                drone.land()
                take_off = False
            # button X is kill camera, start post-processing
            elif sensor_data["button_X"]:
                exit_flag = True
            # button Y is emergency stop
            elif sensor_data["button_Y"]:
                drone.emergency_stop()
                take_off = False
                break

            if exit_flag:
                break
            # displaying the frame and writing it
            now = time.time()
            cv2.imshow("frame", frame)
            cv2.waitKey(1)  # needed for imshow to actually refresh the window
            out.write(frame)
            logging.info("Writing/showing frame took {} seconds".format(time.time() - now))
            logging.info("Full loop took {} seconds".format(time.time() - loop_start_time))
    

  • administrators

    Reduce the captured image size by scaling it down, or simply crop to the area you need. If you have a 320x240 = 76,800 pixel image and you halve the width and height, you get 160x120 = 19,200 pixels, which significantly drops the number of per-pixel math operations. Let me know if that helps you out. It also appears you are writing the image with out.write(frame); comment that out and see if you notice a speed increase. I would also recommend threading the program so you can put the camera reads and the sensor updates on their own threads.
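
    For example, a minimal version of that downscale right after the read (a sketch):

        import cv2

        # halve the width and height before any further per-pixel work
        frame = cv2.resize(frame, None, fx=0.5, fy=0.5,
                           interpolation=cv2.INTER_AREA)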

    Arnold


  • administrators

    @opencvfr3ak Tagging you to make sure you got this response. Let me know if you have any more questions!



  • I did miss this, thanks for tagging me!

    I'll try reducing the size of the image, commenting out the writing, and possibly threading. Thanks so much for the comments!



  • Thanks again Arnold and Leila for the help!

    My issue was really with the cap.read() method. Implementing threading, so that the camera is read and a frame saved in another thread while the main thread just grabs whatever frame is available, helped a lot. This blog post helped me a ton in that regard: https://www.pyimagesearch.com/2015/12/21/increasing-webcam-fps-with-python-and-opencv/
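
    In case it helps anyone else, the pattern boils down to something like this (a rough sketch; the class name is mine, and the blog post has a fuller version):

        import threading

        import cv2

        class ThreadedCamera:
            """Read frames on a background thread so the main loop never blocks on cap.read()."""

            def __init__(self, src=0):
                self.cap = cv2.VideoCapture(src)
                self.grabbed, self.frame = self.cap.read()
                self.stopped = False
                threading.Thread(target=self._update, daemon=True).start()

            def _update(self):
                # keep overwriting self.frame with the newest frame
                while not self.stopped:
                    self.grabbed, self.frame = self.cap.read()

            def read(self):
                # return the most recent frame (may be the same frame twice)
                return self.frame

            def stop(self):
                self.stopped = True
                self.cap.release()

    The main loop then swaps r, frame = cap.read() for frame = camera.read(), which returns immediately.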

    My next issue was with the calls to get the gyro angle data, which occasionally took quite some time, and with the move() command, which also took quite some time. I decided to simply take the yaw/throttle/pitch/roll input from the controller and layer that into the HUD, cutting out the call to the drone for the gyro angle data. Then I found that calling the move command with the four yaw/throttle/pitch/roll inputs, instead of setting those values and then doing a bare move(), was generally faster too. Not sure if that was me going crazy or just my computer/hardware setup, but it works for me.
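
    Concretely, the change was from something like the first form below to the second (setter names and argument order are from memory, so double-check them against the CoDrone docs):

        # slower for me: set each axis, then call move() with no arguments
        drone.set_roll(roll)
        drone.set_pitch(pitch)
        drone.set_yaw(yaw)
        drone.set_throttle(throttle)
        drone.move()

        # noticeably faster for me: pass all four values in a single call
        drone.move(roll, pitch, yaw, throttle)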

    I saved enough time each loop that I now even have time to run an OpenCV Haar cascade on each frame in real time, instead of after landing, though that introduces a little lag. It's manageable, and way better than the lag I had earlier that made it impossible to fly or see anything in the camera. Your suggestion of downscaling would help a lot in this regard, I believe, but I'm a little too tired today to implement it. Next session!
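
    For reference, the per-frame face detection is the stock OpenCV pattern (a sketch, assuming OpenCV 4 and the pretrained cascade file that ships with opencv-python):

        import cv2

        # load the pretrained frontal-face cascade bundled with OpenCV
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def detect_faces(frame):
            # Haar cascades work on grayscale images
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            for (x, y, w, h) in faces:
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            return frame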

    I'll take a video a bit later in better lighting, or just a still from the video, and post it.

    Happy flying!



  • Here's a screenshot of the output video. There are some issues with inputs (why is yaw -24? I think it's an issue with my controller), and it's only 6 frames per second (again, downscaling would probably help), but you can see it picks up Matt Damon's face in Ocean's 13 during my lazy Sunday night Netflix...

    Ziggy the drone has come a long way... Six months ago he was pronounced dead after a repair job gone bad (a motor ripped out by yours truly), yet after a more successful repair job Ziggy made an appearance at a Girls Who Code robotics event. Today Ziggy takes his first shaky (and spirally, thanks to bad inputs) steps towards sight. Exciting times!!

    ![output video screenshot](0_1571025893198_drone_image_10_13_2019_2.png)



  • @opencvfr3ak That is awesome! I too am trying to fine-tune the CoDrone (though each pattern/code execution doesn't produce a repeatable result). I've been trying to get the drone to take off (never the same twice), fly a rectangular pattern, and land where it took off, and I've been trying to get the camera to ID the take-off spot. I'd be interested in looking at your code, if you allow it, to see where I've been screwing up. Thanks for the heads up on the move() time savings; that should help as I experiment today. I too have seen inconsistencies with the yaw.
    I've been trying to gauge which sensor is better, the X/Y of the gyro vs. the optical sensor, and the two are showing inconsistent (different from each other) results.
    Ultimately (I'm also new to drones), I'd like to:

    1. follow a pattern based on the visual layout of the ground (e.g., the white dashed lines of a road)
       and/or
    2. given an X/Y coordinate, calculate the vector and fly there, then do OpenCV stuff (see the sketch after this list)
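
    For item 2, a minimal sketch of the vector math (plain Python, assuming a flat X/Y plane in whatever units your position estimate uses):

        import math

        def heading_and_distance(x, y, target_x, target_y):
            """Angle in degrees (0 = +X axis, counterclockwise) and straight-line distance to the target."""
            dx, dy = target_x - x, target_y - y
            return math.degrees(math.atan2(dy, dx)), math.hypot(dx, dy)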

    Awesome work, and I'm looking forward to seeing what else you are developing. Seeing what you post opens the door to what I think is actually possible.



  • @opencvfr3ak any chance I could get a look at your code? I tried using your snippet, but there were too many errors without the rest of it to reinvent. Thanks. I'm also trying to do recognition, not of a face but of a red spot on the ground, to develop a self-landing feature (so far the coding for having it self-land based on X/Y coords is failing me...)



  • Hey! Sorry for the delay, I've been lazy. Your project sounds interesting! I haven't trained my own model for recognizing bits of an image, but that sounds cool. I think a focus for you will be on reducing features and making sure your model runs pretty quickly!
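
    For the red spot specifically, you may not even need a trained model; a plain HSV color threshold could be enough (a rough sketch, assuming OpenCV 4 — the ranges will need tuning to your lighting):

        import cv2

        def find_red_spot(frame):
            """Return the (x, y) centroid of the largest red blob, or None."""
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            # red wraps around the hue axis, so combine two hue ranges
            mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
                   cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return None
            m = cv2.moments(max(contours, key=cv2.contourArea))
            if m["m00"] == 0:
                return None
            return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

    The offset of that centroid from the frame center could then drive the roll/pitch corrections for landing.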

    Here's the code. It's not super cleaned up, but I hope it's helpful for you! As I push forward I may push to this repo; I haven't thought too much about it yet though.
    https://github.com/MZandtheRaspberryPi/im_practical_programming

