# M5Stack UnitV AI Camera (OV7740) Documentation

## Overview

The M5Stack UnitV AI Camera (OV7740) is a compact, high-performance camera module designed for AI and computer vision applications. It is equipped with an OV7740 image sensor, a 1/4-inch 0.3-megapixel CMOS sensor. The camera module is compatible with the M5Stack development platform, allowing for easy integration with various IoT projects.

## Technical Specifications

- Image Sensor: OV7740 (CMOS)
- Sensor Size: 1/4 inch
- Resolution: 0.3 megapixel, up to 640x480 (VGA)
- Pixel Size: 3.0 µm x 3.0 µm
- Minimum Illumination: 1.5 Lux
- Frame Rate: up to 30 fps
- Interface: 4-pin Grove (I2C), SPI
- Power Supply: 3.3V to 5V
- Current Consumption: 150 mA (typical)
- Dimensions: 22mm x 22mm x 12mm
- Operating Temperature: -20°C to 70°C
- Weight: 5g

## Applications

The M5Stack UnitV AI Camera (OV7740) is suitable for a wide range of IoT applications, including:

- Smart home devices
- Security and surveillance systems
- Robotics and autonomous systems
- Industrial inspection and quality control
- Wearables and augmented reality devices
- Environmental monitoring and tracking systems

## Code Examples

### Example 1: Basic Camera Operations with UIFlow

The following example demonstrates how to use the M5Stack UnitV AI Camera with UIFlow, a visual programming platform for M5Stack devices.
```python
from m5stack import *
from m5ui import *
from uiflow import *
import unitv_camera

# Initialize the camera
camera = unitv_camera.init()

# Set the camera resolution and format
camera.set_resolution(camera.VGA)
camera.set_format(camera.RGB565)

# Take a photo and display it on the M5Stack screen
image = camera.capture()
lcd.image(0, 0, image)

# Release the camera
camera.deinit()
```
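Example 1 configures the camera for RGB565, a 16-bit packed pixel format. As a format illustration only (plain Python, independent of the camera API), each pixel packs 5 bits of red, 6 bits of green, and 5 bits of blue:

```python
def rgb888_to_rgb565(r, g, b):
    """Pack 8-bit R, G, B channels into one 16-bit RGB565 value."""
    return ((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3)

def rgb565_to_rgb888(v):
    """Unpack a 16-bit RGB565 value back to approximate 8-bit channels."""
    r = (v >> 11) & 0x1F
    g = (v >> 5) & 0x3F
    b = v & 0x1F
    # Replicate the high bits into the low bits to scale back to 0-255
    return (r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2)

print(hex(rgb888_to_rgb565(255, 255, 255)))  # 0xffff
print(hex(rgb888_to_rgb565(255, 0, 0)))      # 0xf800
```

The extra bit for green reflects the eye's higher sensitivity to green; pure white (0xFFFF) and pure red (0xF800) survive the round trip exactly.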
Example 1 initializes the camera, sets the resolution and format, takes a photo, displays it on the M5Stack screen, and finally releases the camera.

### Example 2: Object Detection using TensorFlow Lite and MicroPython

The following example demonstrates how to use the M5Stack UnitV AI Camera with TensorFlow Lite and MicroPython for object detection.
```python
import utime
from machine import I2C
import tflite_runtime.micro as tflite
import unitv_camera

# Initialize the camera
camera = unitv_camera.init()

# Load the TensorFlow Lite model
model = tflite.load('model.tflite')

# Initialize the I2C bus for communication with the camera
i2c = I2C(0, freq=400000)

while True:
    # Capture an image from the camera
    image = camera.capture()

    # Preprocess the image for object detection
    image = image.resize((224, 224))
    image = image.convert('RGB')
    image = image_to_tensor(image)  # user-defined: convert the image to an input tensor

    # Run object detection using the TensorFlow Lite model
    outputs = model.invoke(image)

    # Get the detected objects and their probabilities
    objects = get_objects(outputs)  # user-defined: parse the model outputs

    # Print the detected objects
    for obj in objects:
        print(f'Detected object: {obj["label"]} ({obj["probability"]:.2f}%)')

    # Wait for 1 second before capturing the next image
    utime.sleep(1)
```
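Example 2 relies on two user-defined helpers, `image_to_tensor` and `get_objects`, that are not shown. A minimal sketch of what they might look like, in plain Python (the [0, 1] scaling, the label list, and the one-score-per-label output layout are assumptions; adapt them to your model):

```python
def image_to_tensor(pixels):
    """Flatten nested rows of (r, g, b) tuples into floats scaled to [0, 1].

    Assumes 8-bit channels; real models may expect a different layout or scaling.
    """
    return [channel / 255.0 for row in pixels for pixel in row for channel in pixel]

# Hypothetical label list; a real model ships with its own.
LABELS = ['person', 'cat', 'dog']

def get_objects(outputs, threshold=0.5):
    """Pair each output score with its label, keeping scores above the threshold.

    Assumes `outputs` holds one probability per label, in label order.
    """
    return [
        {'label': LABELS[i], 'probability': score * 100}
        for i, score in enumerate(outputs)
        if score >= threshold
    ]
```

For instance, `get_objects([0.75, 0.25, 0.5])` keeps 'person' and 'dog' with the default 0.5 threshold.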
Example 2 initializes the camera, loads a TensorFlow Lite model for object detection, captures images, preprocesses them, runs object detection using the model, and prints the detected objects and their probabilities.

Note: The above examples are for illustrative purposes only and may require modifications to work with your specific use case. Additionally, you may need to install additional libraries or frameworks depending on your development environment.
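A closing sizing note: a raw frame's memory footprint follows directly from resolution and bytes per pixel, which matters on memory-constrained MicroPython boards. At the sensor's maximum VGA resolution in RGB565:

```python
# Raw frame size = width * height * bytes per pixel
WIDTH, HEIGHT = 640, 480   # VGA, the OV7740's maximum resolution
BYTES_PER_PIXEL = 2        # RGB565 stores one pixel in 16 bits

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
print(frame_bytes)                       # 614400
print(f'{frame_bytes / 1024:.0f} KiB')   # 600 KiB
```

Capturing at a lower resolution or streaming line-by-line avoids holding the full 600 KiB frame in RAM at once.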