
[STM32 Development Board] FryPi Fried Chicken Pie

 
Overview

FryPi
STM32F411RET6 Development Board

Introduction:
  This is a mini STM32F411RET6 development board, smaller than the palm of your hand. The core board cost is under 60 RMB. It can be used for AI development, UI development, digital power control, and even for your graduation project and other related projects.


  The initial motivation for this board was my earlier smartwatch project, OV-Watch. Many people who replicated it found several components very difficult to solder, especially the main controller, which was too small and made secondary development inconvenient. I also planned to deploy AI workloads on STM32 and write tutorials about it. Thus FryPi was born. The board suits beginners as well as advanced developers, though the advanced example projects do assume some prior knowledge.
  The reason for choosing this MCU is that the STM32F411RET6 can perfectly replace the STM32F411CEU6 used in the original smartwatch project, and Simulink also offers hardware support for the F411.


Features:

The MCU uses an STM32F411RET6, a Cortex-M4 core with DSP and FPU, 512 Kbytes of Flash memory, a 100 MHz CPU, and an ART Accelerator.
An external SPI Flash can be soldered on.
Abundant example programs are available, including advanced examples such as smartwatch, thermal imaging gesture recognition, handwritten digit recognition, MATLAB co-development, Simulink-in-the-loop development, etc.
Ports are provided for connecting external expansion boards (e.g., the animated demonstration at the top shows a CAM expansion board plugged into the Core board).
Both dual-Type-C and single-Type-C versions of the Core board are available.
The FryPi pin map is shown below; currently, only some I/O ports are used for peripherals such as LCDs and touchscreens.



Hardware Description: The current hardware version is V1.1, consisting of a Core board, a single-Type-C Core board, a CAM expansion board, and an OV2640 camera module, all of which can be purchased online. The 3D rendering of the Core board is shown below. Feel free to explore and create your own expansion boards, such as a sensor expansion board, or plug the Core board directly into a robot car.


The software documentation includes the example demos listed in the directory below. The demos are plentiful and are divided into Basic and Advanced examples. For details, please follow the link or browse them directly in the GitHub repository.

Basic
0.template
1.GPIO
2.USART
3.TIM
4.PWM
5.ADC
6.SPI
7.LCD
...todo


Advanced
0.FreeRTOS template
1.How to use CubeAI
2.Handwritten digit recognition
3.Thermal imaging gesture recognition
4.Using VScode EIDE plugin
5.Simulink in-the-loop development
6.LVGL smartwatch
7.OV2640 camera + recognition
...todo



On-machine testing
  Once you receive the soldered development board, flash the template routine and check its behavior. If the hardware is fine, plugging the serial-port Type-C connector into the computer will make LED L2 blink. Pressing the key switches modes: the blinking frequency of L2 changes, and the host computer receives the corresponding mode information over the serial port.


  If you instead plug the USB Type-C connector into the computer, the board enumerates as a USB drive; the computer will prompt you to format it, and after formatting it behaves as a simulated USB drive.


This section walks through one example, using thermal-imaging gesture recognition as the demo project (for other detailed examples, please refer to GitHub):
I. Folder Structure
├─python_codes
│  │  data_2_imgfile.py
│  │  data_get.py
│  │  data_show.py
│  │  gesture.h5
│  │  test.py
│  │  train.py
│  │
│  └─camera_data
│        test_data.npz
│        train_data.npz
│
└─stm32_codes
   ├─ThermalCamera_data_send
   └─Thermalgesture

  The folder structure is roughly as shown above. `python_codes` contains the code for acquiring data from the board, training the network, and testing it: the collected thermal-imaging data is saved as .npz files for training and testing, while the trained model is saved in .h5 format. `stm32_codes` holds the STM32 code: the first project runs on the FryPi, reads raw thermal-imaging frames, refreshes the LCD, and sends the data over the serial port; the second contains the already deployed thermal-imaging gesture-recognition code.
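The .npz workflow described above can be sketched as follows; the array key names (`images`, `labels`) are illustrative assumptions, since the actual keys used by `data_get.py` / `train.py` may differ.

```python
import numpy as np

# One MLX90640 frame is a 24x32 temperature map; stack frames with labels.
frames = np.random.rand(10, 24, 32).astype(np.float32)  # placeholder data
labels = np.random.randint(0, 6, size=10)               # 0 = no gesture, 1..5

# Save a dataset the way train.py could consume it (key names are assumed).
np.savez("train_data.npz", images=frames, labels=labels)

# Load it back, as the training/testing scripts would.
data = np.load("train_data.npz")
print(data["images"].shape, data["labels"].shape)
```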

II. Thermal imaging streaming to the screen
  This part directly uses the MLX90640 API provided by the vendor, with the I/O ports changed accordingly. Because dot-by-dot refresh is too slow, the `drawPicture(void)` function called from `mlx90640_display_process(void)` refreshes the screen from a buffer instead. See the code for details.


III. Convolutional Neural Network Setup, Training, and Deployment
1. CNN Setup and Training
  The convolutional neural network structure is shown in the figure below. The training set contains approximately 4000 image arrays. `num_epochs` is set to 50, `batch_size` to 64, and the Adam optimizer's `learning_rate` to 0.01. The final accuracy can exceed 0.9. See `./python_codes/train.py` for the detailed network setup.
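As a quick sanity check on the hyperparameters above (the sample count is approximate), with ~4000 samples and a batch size of 64 each epoch runs about 63 optimizer steps, so 50 epochs amount to roughly 3150 weight updates:

```python
import math

num_samples = 4000   # approximate training-set size
batch_size = 64
num_epochs = 50

steps_per_epoch = math.ceil(num_samples / batch_size)  # 63 batches per epoch
total_updates = steps_per_epoch * num_epochs           # total weight updates
print(steps_per_epoch, total_updates)  # -> 63 3150
```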


  The following shows how to build a CNN model using Keras. For other related code, please refer to the Python code.
#------------------------------【Building the Model】---------------------------------
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(filters=5, kernel_size=(5, 5), padding='valid',
                           activation=tf.nn.relu, input_shape=(24, 32, 1)),
    tf.keras.layers.MaxPool2D(pool_size=(2, 2), padding='same'),
    tf.keras.layers.Conv2D(filters=3, kernel_size=(3, 3), padding='valid',
                           activation=tf.nn.relu, input_shape=(10, 14, 5)),
    tf.keras.layers.MaxPool2D(pool_size=(2, 2), padding='same'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(units=32, activation=tf.nn.relu),
    tf.keras.layers.Dense(units=16, activation=tf.nn.relu),
    tf.keras.layers.Dense(units=6, activation=tf.nn.softmax)
])

model.summary()
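The `input_shape=(10, 14, 5)` given to the second Conv2D can be double-checked by tracing the tensor shapes through the layers: a 'valid' convolution shrinks each spatial dimension by kernel_size - 1, and a 'same' 2x2 pooling halves it (rounding up). A quick sketch:

```python
import math

def conv_valid(h, w, k):
    # 'valid' convolution: output shrinks by (k - 1) in each dimension
    return h - k + 1, w - k + 1

def pool_same(h, w, p=2):
    # 'same' 2x2 max-pooling: dimensions halve, rounded up
    return math.ceil(h / p), math.ceil(w / p)

h, w = 24, 32               # one MLX90640 frame
h, w = conv_valid(h, w, 5)  # -> (20, 28), 5 channels
h, w = pool_same(h, w)      # -> (10, 14): matches input_shape of the 2nd Conv2D
h, w = conv_valid(h, w, 3)  # -> (8, 12), 3 channels
h, w = pool_same(h, w)      # -> (4, 6)
flat = h * w * 3            # Flatten feeds 72 features into the Dense stack
print(flat)                 # -> 72
```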
2. Deploying the CNN to STM32
  First, generate code for the trained gesture.h5 model with the CubeMX X-CUBE-AI tool, then call its API. The main function used here is `ai_mnetwork_run()`, wrapped by `user_ai_run(const ai_float *in_data, ai_float *out_data)` in the code block below. Because the SystemPerformance application was selected during CubeMX code generation, `MX_X_CUBE_AI_Process()` should be commented out: it is only meant to benchmark the model's performance and fills the input with random numbers by default.
uint8_t user_ai_run(const ai_float *in_data, ai_float *out_data)
{
    int idx = 0;
    int batch = 0;
    ai_buffer ai_input[AI_MNETWORK_IN_NUM];
    ai_buffer ai_output[AI_MNETWORK_OUT_NUM];

    if (net_exec_ctx[idx].handle == AI_HANDLE_NULL)
    {
        printf("E: network handle is NULL\n");
        return -1;
    }

    ai_input[0] = net_exec_ctx[idx].report.inputs[0];
    ai_output[0] = net_exec_ctx[idx].report.outputs[0];

    ai_input[0].data = AI_HANDLE_PTR(in_data);
    ai_output[0].data = AI_HANDLE_PTR(out_data);

    batch = ai_mnetwork_run(net_exec_ctx[idx].handle, ai_input, ai_output);
    if (batch != 1) {
        aiLogErr(ai_mnetwork_get_error(net_exec_ctx[idx].handle),
                 "ai_mnetwork_run");
        return -2;
    }

    return 0;
}
  Note that the input data passed in here has already been normalized, as shown in the following code.
static void normalizeArray()
{
    float range = maxTemp - minTemp;
    for (uint16_t i = 0; i < 24 * 32; i++) {
        normalizetempValues[i] = (tempValues[i] - minTemp) / range;
    }
}
  The output is an array of length 6, whose entries correspond to "no gesture" and gestures 1~5 respectively. Classification simply picks the largest value: the index of the maximum output is taken as the current gesture. The following code finds that index.
static uint8_t findMaxIndex(float arr[], uint8_t size) {
    if (size == 0) {
        // Empty array: return a sentinel (uint8_t cannot represent -1)
        return 255;
    }

    uint8_t maxIndex = 0; // Assume the first element holds the maximum

    for (uint8_t i = 1; i < size; ++i) {
        // Check if the current element exceeds the current maximum
        if (arr[i] > arr[maxIndex]) {
            maxIndex = i; // Update the index of the maximum value
        }
    }

    return maxIndex;
}
  Finally, the entire recognition flow is as follows.
normalizeArray();  // Normalize first
if (user_ai_run(normalizetempValues, outputs))  // Feed the normalized data through the network
{
    printf("\nrun error\n");
}
uint8_t temp = findMaxIndex(outputs, sizeof(outputs) / sizeof(outputs[0]));  // Find the index of the maximum value
printf("\npredict gesture:%d\n", temp);  // Print the recognized gesture
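As a complement, the same normalize, infer, argmax flow can be mirrored on the PC in Python for quick offline checks; `fake_forward` below is a hypothetical stand-in for the deployed CNN, not the real model.

```python
import numpy as np

def normalize_frame(temps):
    # Min-max normalization, mirroring normalizeArray() on the STM32
    lo, hi = temps.min(), temps.max()
    return (temps - lo) / (hi - lo)

def fake_forward(frame):
    # Hypothetical stand-in for the CNN: returns 6 class scores
    scores = np.zeros(6)
    scores[3] = 1.0  # pretend the network is confident in gesture 3
    return scores

frame = np.random.rand(24, 32) * 10 + 20  # fake MLX90640 temperature map
outputs = fake_forward(normalize_frame(frame))
gesture = int(np.argmax(outputs))         # same rule as findMaxIndex()
print("predict gesture:", gesture)
```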
  Of course, the above is just one of the demos. For more basic and advanced routine demos, please refer to the GitHub repository.
Purchase List:
1. Touchscreen: search for model P169H002-CTP on Taobao.


2. STM32F411RET6: I usually buy from Youxin Electronics.
3. Others: For convenience, I usually buy other resistors and capacitors directly from LCSC.
Demo Video
(Bilibili Link): https://www.bilibili.com/video/BV1u2421F7kf/
Code Repository
(GitHub): https://github.com/No-Chicken/FryPi
Gitee: https://gitee.com/kingham/FryPi
There are many demo projects in the software section on GitHub, and I've written detailed descriptions of them, so I suggest you check the code and demo tutorials directly on GitHub. If you can't open the repository, you can find a GitHub mirror online, use a VPN, or search for me on Gitee.
Tutorial
website: https://no-chicken.xyz
Basic tutorial videos: https://www.bilibili.com/video/BV1Gy411e7EH
Advanced tutorial videos: https://www.bilibili.com/video/BV1nw4m1e7TJ
QQ groups:
Group 1: 572216445
Group 2: 912218004