Access with OpenCV
Tagged: C++, FLIR Lepton, opencv, PURE THERMAL 2, Python, Windows
This topic contains 4 replies, has 5 voices, and was last updated by Hao-Pu 2 months, 1 week ago.
Participant
Has anyone had issues getting a stream from the Lepton and the PureThermal 2 board? I'm using Windows with Python 3.7 and OpenCV 4.1.0 in an Anaconda environment. I'm able to get everything working correctly within the Lepton User App as well. When using the example Python code provided with the getting-started material, I only get a very faint black/gray video stream with what seems to be a couple of rows of dead pixels at the bottom of the stream (these don't appear in the User App). The same thing happens on both Windows 7 and Windows 10. Doing the same thing in Ubuntu actually gives me a video stream with the default iridium color. I'd like to avoid using Ubuntu for the development work, but if it's necessary I suppose I'll have to. I've attached images and code as well. Any help or suggestions would be appreciated.
# -*- coding: utf-8 -*-
"""
Created on Tue Jul 16 14:29:25 2019
@author: ashane
"""
###############################################################################
# MESSING AROUND WITH THE FLIR LEPTON

# IMPORTS
import cv2

cap = cv2.VideoCapture(1)

# Resize frame
def rescale_frame(frame, percent=75):
    width = int(frame.shape[1] * percent / 100)
    height = int(frame.shape[0] * percent / 100)
    dim = (width, height)
    return cv2.resize(frame, dim, interpolation=cv2.INTER_AREA)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    frame = rescale_frame(frame, percent=500)
    print(frame.shape)

    # OpenCV frames are BGR, so split in that order
    b, g, r = cv2.split(frame)
    cv2.imshow('red', r)
    cv2.imshow('green', g)
    cv2.imshow('blue', b)

    # Our operations on the frame come here
    frame_hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    frame_v = frame_hsv[:, :, 2]
    # frame_v = cv2.applyColorMap(frame_v, cv2.COLORMAP_HOT)

    # Display the resulting frame
    cv2.imshow('frame', frame_v)
    if cv2.waitKey(25) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
Participant
I have the same error. Help me!!!
Participant
I just started using the PureThermal Mini and FLIR Lepton 2.5 a few days ago. My native language is Japanese and I am translating into English by machine, so please forgive any sentences that are difficult to understand.

I solved this problem with C# on Windows, but I don't know if this solution is really correct, so I would appreciate feedback from the participants of this forum. The fix was to call the DLL included in the SDK before initializing OpenCV. In C# it looks like the following code:

var devices = Lepton.CCI.GetDevices();
var device = devices[0];
var CCIHandle0 = device.Open();

After that, confirm that the instance can be obtained with the following code:
Console.WriteLine(device.Name);
Console.WriteLine(CCIHandle0.sys.GetCameraUpTime().ToString());

If you can get the instance correctly, use the following code to get the image via OpenCV:
var img = new OpenCvSharp.Mat();
var camera = new OpenCvSharp.VideoCapture(0) {FrameWidth = 80, FrameHeight = 60, };
camera.Read(img);

I don't know if this method is really correct. If there is a mistake in my method, please point it out.
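If you want to try the same "open the CCI device before OpenCV" idea from Python, the pythonnet package can load the SDK's .NET DLL. This is only a sketch: the DLL name LeptonUVC and its location are assumptions about how the Windows SDK packages the Lepton.CCI wrapper, so adjust them to match whatever DLL your SDK actually ships.

import clr  # provided by the pythonnet package (pip install pythonnet)
import cv2

# Assumption: the SDK's CCI wrapper is LeptonUVC.dll sitting next to this
# script; change the reference to match the DLL in your SDK folder.
clr.AddReference("LeptonUVC")
from Lepton import CCI

# Open the first Lepton CCI device before creating the OpenCV capture,
# mirroring the C# code above.
devices = CCI.GetDevices()
handle = devices[0].Open()
print(devices[0].Name)
print(handle.sys.GetCameraUpTime())

# Now grab a frame through OpenCV as usual.
cap = cv2.VideoCapture(0)
ok, img = cap.read()
print(ok, None if img is None else img.shape)
cap.release()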
Participant
I also solved the problem in the same way: by calling the DLL included in the SDK before OpenCV, as in the C# code above.
OpenCV provides a straightforward interface for capturing a live stream from a camera (webcam). To capture video, we create an object of the VideoCapture class; it accepts either a device index or the name of a video file. The device index is the number that identifies the camera. We can select a camera by passing 0 or 1 as the argument, and after that we can capture the video frame by frame.
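A minimal sketch of that idea in Python (probing indices 0 through 3 is an arbitrary choice here; the index the PureThermal board gets depends on what other cameras Windows has enumerated):

import cv2

# Probe the first few device indices to find the PureThermal board; the
# index depends on how many other cameras are attached to the system.
for index in range(4):
    cap = cv2.VideoCapture(index)
    ok, frame = cap.read()
    if ok:
        print(f"Device {index}: frame shape {frame.shape}")
    cap.release()

Once you know which index the board is on, pass that number to cv2.VideoCapture in your capture loop.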
Participant
You may be getting the Y16 format. If you get a 160×122 frame, it means Telemetry mode is enabled (the extra rows at the bottom are telemetry data, not dead pixels). Please refer to the following table:
Index : 0
Type : Video Capture
Pixel Format: 'UYVY'
Name : UYVY 4:2:2
    Size: Discrete 160x120
        Interval: Discrete 0.111s (9.000 fps)

Index : 1
Type : Video Capture
Pixel Format: 'Y16 '
Name : 16-bit Greyscale
    Size: Discrete 160x120
        Interval: Discrete 0.111s (9.000 fps)
    Size: Discrete 160x122
        Interval: Discrete 0.111s (9.000 fps)

Index : 2
Type : Video Capture
Pixel Format: 'GREY'
Name : 8-bit Greyscale
    Size: Discrete 160x120
        Interval: Discrete 0.111s (9.000 fps)

Index : 3
Type : Video Capture
Pixel Format: 'RGBP'
Name : 16-bit RGB 5-6-5
    Size: Discrete 160x120
        Interval: Discrete 0.111s (9.000 fps)

Index : 4
Type : Video Capture
Pixel Format: 'BGR3'
Name : 24-bit BGR 8-8-8
    Size: Discrete 160x120
        Interval: Discrete 0.111s (9.000 fps)
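The faint black/gray image usually comes from the 16-bit Y16 values being displayed as if they were 8-bit. A sketch of one way to grab the raw Y16 data in Python and normalize it yourself (untested here; whether CAP_PROP_CONVERT_RGB and the FOURCC hint take effect depends on your OpenCV build and backend, and the device index is a guess):

import cv2
import numpy as np

# Open the PureThermal board (the device index may differ on your machine);
# CAP_DSHOW selects the DirectShow backend on Windows.
cap = cv2.VideoCapture(1, cv2.CAP_DSHOW)

# Ask OpenCV for the raw pixel data instead of an RGB-converted image.
cap.set(cv2.CAP_PROP_CONVERT_RGB, 0)
# Depending on the build, explicitly requesting Y16 may also be needed:
# cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('Y', '1', '6', ' '))

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Inspect what the backend actually delivers; with Y16 this should be a
    # 16-bit single-channel image, 160x122 when telemetry is enabled.
    # print(frame.shape, frame.dtype)

    # Drop the telemetry rows at the bottom if present.
    if frame.shape[0] == 122:
        frame = frame[:120, :]

    # Stretch the values to 0-255 so the image is no longer faint.
    frame = frame.astype(np.float32)
    frame -= frame.min()
    frame /= max(float(frame.max()), 1.0)
    frame_8bit = (255 * frame).astype(np.uint8)

    cv2.imshow('Lepton (normalized)', cv2.applyColorMap(frame_8bit, cv2.COLORMAP_HOT))
    if cv2.waitKey(25) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()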