Botmation's Blog: a personal blog for creating robots and AI (https://botmation.net/)

Nurturing Creative Minds: Teaching STEM to Kids in the Era of Generative AI (May 27, 2023)

In this rapidly advancing digital age, the transformative power of Artificial Intelligence (AI) is becoming increasingly evident in all aspects of our lives. From autonomous vehicles to smart homes, AI has undoubtedly revolutionized the world around us. As we prepare future generations for this AI-driven world, it is crucial to empower children with the skills and mindset necessary to navigate and harness the full potential of this technology. Teaching Science, Technology, Engineering, and Mathematics (STEM) to kids has always been important, but in the new age of generative AI, fostering creativity becomes even more critical. This blog explores the changing landscape of STEM education and the role of creativity in maximizing the benefits of AI for young learners.

The Evolving Nature of STEM Education: STEM education has long focused on equipping students with the foundational knowledge and problem-solving skills required to excel in the fields of science and technology. However, with the advent of generative AI, the educational landscape is evolving rapidly. AI systems are increasingly capable of automating routine tasks, data analysis, and even generating novel content. As a result, STEM education must adapt to prepare children for a future where AI plays a prominent role.

Creativity:

While AI excels at processing vast amounts of data and executing complex tasks, it still lacks the creative and imaginative capabilities inherent in human beings. As educators, we must emphasize the development of creative thinking skills in tandem with STEM education to ensure that children can leverage AI as a powerful tool rather than being replaced by it, much like how computers shifted drafting from manual work on paper to digital tools.

  1. Encouraging Problem Solving and Critical Thinking: STEM education should emphasize problem-solving techniques that go beyond the conventional approaches. By integrating AI-related challenges and projects into the curriculum, children can explore innovative ways to leverage AI to solve real-world problems. Encouraging critical thinking and providing open-ended questions foster creativity by allowing children to think beyond the boundaries of existing AI algorithms.
  2. Cultivating Collaboration and Interdisciplinary Learning: The future workforce will require individuals who can collaborate across different disciplines and work effectively with AI technologies. STEM education must incorporate interdisciplinary projects that encourage children to collaborate, exchange ideas, and work together to solve complex problems. By promoting teamwork and interdisciplinary thinking, we can nurture creativity and enable students to harness AI’s power across diverse domains.
  3. Emphasizing Ethical Considerations and Human Values: As AI continues to advance, ethical considerations become increasingly crucial. Children should be taught the importance of using AI responsibly and the potential consequences of its misuse. Discussing topics such as privacy, bias, and fairness helps cultivate a sense of responsibility in students and encourages them to think critically about the impact of AI on society. By incorporating ethics into STEM education, we can shape future innovators who prioritize the well-being of humanity.
  4. Embracing AI as a Creative Tool: Rather than fearing the rise of generative AI, educators should embrace it as a powerful creative tool. AI can assist children in generating ideas, exploring possibilities, and unleashing their creativity in various domains. By introducing AI-driven tools like generative art, music composition, or storytelling, we empower students to explore their own creativity while leveraging the capabilities of AI. This approach not only demystifies AI but also demonstrates its potential as a collaborative partner in the creative process.

In the age of generative AI, teaching STEM to kids demands a shift towards fostering creativity alongside traditional technical skills. By emphasizing problem-solving, interdisciplinary learning, ethical considerations, and embracing AI as a creative tool, educators can empower young learners to harness AI’s potential while maintaining their uniquely human qualities. Equipped with both technical expertise and creative thinking, the next generation will be well-prepared to focus on ideas and problem-solving while letting AI handle the task of building.

My New Assistant ChatGPT (March 4, 2023)

I have been playing around with ChatGPT for a while now, trying different prompts and examples I have seen on the web. One thing it does great compared to existing assistants such as Alexa or Google is the conversational flow. This assistant really feels like you are talking to someone.

From this flow we can use ChatGPT as a low-cost instructor for simple applications.
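
Everything below uses the ChatGPT web interface, but if you prefer to script the same kind of prompt, a minimal sketch with the openai Python package looks something like this (the model name and API key handling are my assumptions, not part of the demo that follows):

import openai

openai.api_key = "YOUR_API_KEY"  # assumption: you already have an OpenAI API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model name
    messages=[
        {"role": "user", "content": "Can you generate raspberry pi project ideas using LED lights?"},
    ],
)
print(response["choices"][0]["message"]["content"])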

I found it great for generating ideas or variations on a topic. I would ask it for project ideas, then ask a follow-up question to have it elaborate on the topic I am interested in.

Let's ask ChatGPT to give me Raspberry Pi project ideas.

Can you generate raspberry pi project ideas using LED lights?

ChatGPT response

Sure, here are some Raspberry Pi project ideas using LED lights:

1. LED Traffic Light Simulator: Build a traffic light simulator using LEDs and the Raspberry Pi. This project can help you learn about programming GPIO pins and timing with Python. You can also add a button to simulate a pedestrian crossing button.

2. LED Matrix Display: Create an LED matrix display using Raspberry Pi and LED strips. You can display scrolling text, animations, and even create a game using the LED matrix display.

3. LED Mood Lamp: Build an LED mood lamp that changes colors based on ambient light or sound. You can use a microphone sensor to detect sound levels and change the LED color accordingly.

4. LED Cube: Create a 3D LED cube that displays various patterns and animations. This project can help you learn about programming 3D animations and graphics.

5. LED Music Visualizer: Build a music visualizer using LED lights and Raspberry Pi. This project can help you learn about audio processing and frequency analysis.

6. LED Night Light: Create a LED night light that turns on automatically in the dark using a light sensor. You can also add a button to change the color of the LED.

7. LED Clock: Build an LED clock that displays the time using LED lights. You can also add temperature and humidity sensors to display additional information.

8. LED Temperature Display: Create an LED temperature display that shows the current temperature using different colors. This project can help you learn about programming temperature sensors and using LED lights to display information.

9. LED Christmas Tree: Build an LED Christmas tree that can be controlled using Raspberry Pi. You can create various lighting patterns and even add music to the tree.

10. LED Aquarium Lighting: Create an LED lighting system for your aquarium using Raspberry Pi. You can program different lighting cycles and colors to simulate different times of the day and help your fish thrive.

Can you provide example code for item 5?

Sure, here's an example Python code for a simple LED music visualizer using a Raspberry Pi and an LED strip:

import alsaaudio as aa
import audioop
import time
from neopixel import *

# LED strip configuration
LED_COUNT = 60       # Number of LED pixels
LED_PIN = 18         # GPIO pin connected to the pixels
LED_FREQ_HZ = 800000 # LED signal frequency in hertz
LED_DMA = 10         # DMA channel to use for generating signal
LED_BRIGHTNESS = 255 # Set to 0 for darkest and 255 for brightest
LED_INVERT = False   # True to invert the signal (when using NPN transistor level shift)

# Create LED strip object
strip = Adafruit_NeoPixel(LED_COUNT, LED_PIN, LED_FREQ_HZ, LED_DMA, LED_INVERT, LED_BRIGHTNESS)
strip.begin()

# Initialize audio input
input = aa.PCM(aa.PCM_CAPTURE, aa.PCM_NONBLOCK, 'hw:1')
input.setchannels(1)
input.setrate(44100)
input.setformat(aa.PCM_FORMAT_S16_LE)
input.setperiodsize(1024)

# Define color gradient for visualizer
colors = [(0, 0, 255), (0, 255, 0), (255, 255, 0), (255, 0, 0)]

# Define function to set LED colors based on sound level
def set_leds(level):
    # Calculate color index based on sound level
    color_index = int(level / 16384.0 * len(colors))
    if color_index >= len(colors):
        color_index = len(colors) - 1
    # Set LED colors
    for i in range(LED_COUNT):
        color = colors[color_index]
        strip.setPixelColor(i, Color(color[1], color[0], color[2]))
    strip.show()

# Main loop
while True:
    # Read audio data from input
    l, data = input.read()
    if l:
        # Calculate sound level
        level = audioop.max(data, 2)
        # Set LED colors based on sound level
        set_leds(level)
    # Sleep for a short time
    time.sleep(0.01)

Can you add a function to the code to change LED colors to all blue for 4 seconds?

Sure, here's an updated version of the code that adds a change_color() function to change the LED colors to all blue for 4 seconds:

import alsaaudio as aa
import audioop
import time
from neopixel import *

# LED strip configuration
LED_COUNT = 60       # Number of LED pixels
LED_PIN = 18         # GPIO pin connected to the pixels
LED_FREQ_HZ = 800000 # LED signal frequency in hertz
LED_DMA = 10         # DMA channel to use for generating signal
LED_BRIGHTNESS = 255 # Set to 0 for darkest and 255 for brightest
LED_INVERT = False   # True to invert the signal (when using NPN transistor level shift)

# Create LED strip object
strip = Adafruit_NeoPixel(LED_COUNT, LED_PIN, LED_FREQ_HZ, LED_DMA, LED_INVERT, LED_BRIGHTNESS)
strip.begin()

# Initialize audio input
input = aa.PCM(aa.PCM_CAPTURE, aa.PCM_NONBLOCK, 'hw:1')
input.setchannels(1)
input.setrate(44100)
input.setformat(aa.PCM_FORMAT_S16_LE)
input.setperiodsize(1024)

# Define color gradient for visualizer
colors = [(0, 0, 255), (0, 255, 0), (255, 255, 0), (255, 0, 0)]

# Define function to set LED colors based on sound level
def set_leds(level):
    # Calculate color index based on sound level
    color_index = int(level / 16384.0 * len(colors))
    if color_index >= len(colors):
        color_index = len(colors) - 1
    # Set LED colors
    for i in range(LED_COUNT):
        color = colors[color_index]
        strip.setPixelColor(i, Color(color[1], color[0], color[2]))
    strip.show()

# Define function to change LED colors to all blue for 4 seconds
def change_color():
    for i in range(LED_COUNT):
        strip.setPixelColor(i, Color(0, 0, 255))
    strip.show()
    time.sleep(4)
    set_leds(0)

# Main loop
while True:
    # Read audio data from input
    l, data = input.read()
    if l:
        # Calculate sound level
        level = audioop.max(data, 2)
        # Set LED colors based on sound level
        set_leds(level)
    else:
        # Change LED colors to all blue for 4 seconds
        change_color()
    # Sleep for a short time
    time.sleep(0.01)

Which GPIO pins do I need to wire the LED?

In the code I provided earlier, the LED strip is connected to GPIO pin 18. However, the specific GPIO pins you will need to wire the LED will depend on the type of LED you are using and how you have it wired.

If you are using a single LED, you will typically need to connect one leg of the LED to a GPIO pin and the other leg to a ground pin. The GPIO pin you use will depend on which GPIO pins are available on your Raspberry Pi and which ones you have available to use.

If you are using an LED strip, you will typically need to connect the data input pin of the LED strip to a GPIO pin on the Raspberry Pi, and also connect the LED strip's power and ground pins to a power source and ground pin, respectively.

You should refer to the documentation or specifications for your specific LED or LED strip to determine which pins you need to connect to your Raspberry Pi. Additionally, it's important to make sure you use the appropriate voltage and current to power your LED, as exceeding the limits can damage the LED or your Raspberry Pi.
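
As a quick aside from me: before running the visualizer, it is worth sanity-checking the wiring with a single LED. A minimal sketch using the RPi.GPIO library looks like this; the pin number is just an example, use whichever free GPIO you wired the LED to.

import time
import RPi.GPIO as GPIO

LED_PIN = 17  # example pin, not necessarily the one you wired

GPIO.setmode(GPIO.BCM)        # use Broadcom pin numbering
GPIO.setup(LED_PIN, GPIO.OUT)

try:
    # Blink the LED a few times to confirm the wiring
    for _ in range(5):
        GPIO.output(LED_PIN, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(LED_PIN, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()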

My Thoughts and Takeaways

Just from a quick conversation with ChatGPT I was able to generate ideas to pick from. Then I had it provide a quick template so I could jump right in.

I even asked where to wire an LED and it remembered which GPIO pin I needed to connect to. This helps me out a lot when I am busy doing many things every day. When I do find time for a bit of fun, I like to just jump right into it. ChatGPT has shown it can help me hit the ground running.

I hope you all enjoyed my quick demo and are as excited about this new technology as I am.

Tips to generate ideas with ChatGPT

  • What are some cool projects I can do with a Raspberry Pi and a camera?
  • How can I use a Raspberry Pi to control a robot arm?
  • How do I program an MQTT client and server?

Phrase your question in a clear and specific way. ChatGPT will try to answer your question by generating a paragraph of text that describes one or more possible project ideas. You can also ask follow-up questions to get more details or clarification from ChatGPT.

  • Can you tell me more about the MQTT client?
  • Can you add an LED variable to the MQTT server?
  • Generate more ideas from the second bullet.

DALL-E Initial Impression (October 1, 2022)

The super cool AI text-to-image generator that took over the internet was just opened to the general public on September 28. I am here to use it for the first time and blog about my general first impression of it.

First we go to the announcement post and see how we can sign up.

DALL·E Now Available Without Waitlist (openai.com)

Signing up was pretty straightforward. Click on sign up and create a quick account.

Then I get to a page where it tells me to just enter a very detailed sentence to get the best results. Below are the images and the text I used. Let's see how close it comes to meeting the hype.

Generating Images

6 dogs playing poker and wearing steak hats.

A banana bed in a fantasy world.

A picture of yourself with a subtitle to give a world a message you want us to know.

A robot painting a artwork of a man eating sushi.

2 androids creating a new version of themselves in their secret lair.

A corvette F-35 fighter jet flying out of a fantasy dragon in mario world.

super saiyan mario

My initial impression

Compared with the articles I read about DALL-E earlier in the year, the results are disappointing. The results are definitely random, and you may have to run the same text a few dozen times before you get an image of what you want. This is probably why DALL-E opted for a pay-per-generation credit model.

I would say this is a great tool to teach kids how a machine can take your input and try its very best to understand what you want, and how what it paints depends heavily on the image datasets it was trained on.

Definitely a neat tool, but I would not rely on it to create consistent artwork over and over again. It works if you are looking for a general concept and you don't mind spending a day on trial and error, which may still be faster than trying to draw these images yourself.

Making the Humanoid Robot Part 2 (June 14, 2020)

A lot has happened since part 1 of this project. A few months have gone by quickly, and I made a lot of progress on the robot. During my time away from society, constructing a humanoid companion seemed kind of fitting. The project is taking a lot longer than I expected, but it is moving along quite nicely.

Sensors

I looked into what essential parts my new robot will need in order to learn to balance and move around. I decided I will need a gyroscope with an accelerometer. The gyroscope will help provide orientation, while the accelerometer will capture changes in motion. That information will be used to determine how strong of a correction the robot needs to make when it senses a change in movement.
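
I have not settled on a specific part yet, so as an illustration only, here is a minimal sketch for reading an MPU-6050 (a common combined gyroscope and accelerometer module) over I2C from a Raspberry Pi. The chip choice and I2C address are my assumptions:

import time
import smbus

MPU_ADDR = 0x68  # assumption: an MPU-6050 on its default I2C address

bus = smbus.SMBus(1)
bus.write_byte_data(MPU_ADDR, 0x6B, 0)  # wake the sensor out of sleep mode

def read_word(reg):
    # Combine the high and low bytes into a signed 16-bit value
    high = bus.read_byte_data(MPU_ADDR, reg)
    low = bus.read_byte_data(MPU_ADDR, reg + 1)
    value = (high << 8) | low
    return value - 65536 if value > 32767 else value

while True:
    accel_x = read_word(0x3B) / 16384.0  # g, at the default +/-2g range
    accel_y = read_word(0x3D) / 16384.0
    accel_z = read_word(0x3F) / 16384.0
    gyro_x = read_word(0x43) / 131.0     # degrees per second, default +/-250 dps range
    gyro_y = read_word(0x45) / 131.0
    gyro_z = read_word(0x47) / 131.0
    print("accel:", accel_x, accel_y, accel_z, "gyro:", gyro_x, gyro_y, gyro_z)
    time.sleep(0.1)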

Frame

Next I needed to construct a lightweight frame. I spent a lot of time in this area; there are so many ways I could try to make it. I went with making the frame out of aluminum bars, which gave me a cheap, lightweight, strong, and easy-to-work-with material. Instead of fully engineering the frame, I can sketch up a concept and make the parts as I go.

The overall concept was modeled after a robot toy. I looked at each body part and its relative ratio to the others. I then thought about the amount of freedom each joint would need. During my research I saw others use as few as 16 degrees of freedom (DOF) and up to 28. I decided to start with 20 DOF for this robot. Some joints have 1 degree of motion, like the knee, and others have 3 degrees, like the hip. I will add more later as the robot progresses.

The hardest part of making the frame was linking multiple servos together. In the image above you can see the servos come with a gear on one side. To make a stable connection, each servo needs a frame. The frame creates a second connection point on the back side of the servo. Below you can see the knee joint servo with a frame and leg extension attached.

The knee joint is relatively easy to make, but when I got to joints with 2 or 3 DOF it got a lot more complicated. To keep the joint size as small as possible, multiple servos need to be combined into one part. Below is how I combined two servos to create a 2 DOF joint. Adding a third became nearly impossible; to make it work I had to attach it to the end of one of the others. This depended on how I wanted the joint to function and where the third axis made the most sense.

Then come the feet. I decided to go with wood here, since it was easier to attach wood than to create another custom metal frame. They ended up looking more like sandals than feet.

Putting it all together was the fun part. It took quite a bit of time to assemble and adjust things as needed. Yes, the robot is still missing a head and has no spine movement; those will be future additions. The next part of this series will be the electronics portion, where I will show how I wired up all the servos and the control system. I hope the servos will be able to handle the weight of this design. If not, then I am going to need bigger servos!

I hope you all look forward to my finished project. My goal is to have the robot learn to stand up and walk on its own. Then progressively add interactive features to it.

Resources

Servo

Gyroscope + Accelerometer

Parts

Giving Back $2 Trillion COVID-19 Stimulus (April 11, 2020)

Today is not my typical post or anything related to my blog, but I wanted to share my thoughts and what I am doing to help.

I just received the stimulus money today. It feels good to receive money out of nowhere. However, I am lucky to be working during this historic time.
Currently there is an enormous number of fellow Americans out of jobs. Weekly unemployment claims reached over 6 million new applications two weeks in a row, with many more expected in the coming months, potentially raising unemployment to 20%. Yes, this is huge! To make matters worse, many companies are trying to conserve money and have stopped hiring, so people without jobs will not be able to get one for months.

This stimulus is for those hardest hit by this pandemic. Since there is no financial impact to me by the pandemic, I donated my entire stimulus to the local food banks and other places that will help those in need.

If you haven't heard about the stimulus, it is a 2 trillion dollar package for small businesses and individuals. Not everyone will be getting money though. There are income limits based on the adjusted gross income from your 2018 or 2019 tax filing.

I ran some rough math on the numbers for estimation purposes. There are roughly 130 million Americans in the job market today. The projected unemployment is 15 to 20% according to some financial articles I have read (the number is changing weekly). Let's say all of those unemployed got the full $1,200 one-time deposit. Using the high end of 20% unemployment, the amount going to them comes to 20% x 130 million x $1,200 = $31.2 billion. The Washington Post estimates 80% of American adults will get the stimulus. Let's use this as another rough number to calculate the amount of money going to people who are still employed: 60% x 130 million x $1,200 = $93.6 billion. That is 3 times what the unemployed are getting. If all those who are still employed have no financial hardship during this time, that money could be given to those in need, which would help stretch out the relief needed, such as food.
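
For anyone who wants to check the rough math above, here it is as a few lines of Python:

# Rough back-of-envelope numbers from the paragraph above
workforce = 130_000_000      # Americans in the job market
unemployment_rate = 0.20     # high-end projection
adults_receiving = 0.80      # Washington Post estimate
payment = 1_200              # one-time deposit

to_unemployed = workforce * unemployment_rate * payment
to_employed = workforce * (adults_receiving - unemployment_rate) * payment

print(f"To the unemployed: ${to_unemployed / 1e9:.1f} billion")    # ~$31.2 billion
print(f"To the still employed: ${to_employed / 1e9:.1f} billion")  # ~$93.6 billion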

Now I know $1,200 is not a lot for anyone struggling financially, but let's say it is enough to cover basic necessities for one month. Those who do not need this stimulus can give it to charities, potentially giving those in need up to 3 more months of additional relief. Hopefully this pandemic will end by then.

I would like to encourage everyone like me who received the stimulus and does not need the money to give it to charities for those in need during these trying times. My goal is to get so many people to do the same that the charities will be reporting record surpluses, enough to reassure the public that even in these dark times we have each other's back. Everything will be OK. Together we can make the best use of this stimulus and help others.

Please share! Stay healthy!

Additional Resources

https://www.irs.gov/coronavirus/non-filers-enter-payment-info-here

https://www.washingtonpost.com/graphics/business/coronavirus-stimulus-check-calculator/

Making a Humanoid Robot Part 1 (January 28, 2020)

Back Story

Hello,

After the long holiday season I am finally back on track to start my next project. I think it's about time I try tackling humanoid robot making. I have been bouncing ideas around with my kid. He originally wanted a robot dog that could play, talk, and clean up his toys. After a few weeks of designing out the details, he came back with: let's make a transformer robot. I of course said, well, that sounds even more awesome than the last!


I was reading up on the latest from CES this year and came upon a Kickstarter company that made just that. The robot is called T9 by Robosen, and it was originally supposed to be Optimus Prime, but I assume they didn't want to pay royalties to Hasbro and ended up with a generic truck look. However, the functions are still there. After seeing this I started going down the YouTube rabbit hole and ended up watching humanoid robot competitions.

This is when I learned that we have come a long way from the early walking robots. I have never built a humanoid robot before, but after seeing how many people have built really agile robots, I am inspired to try it for myself.
Having my own humanoid robot is one of my dreams. All sorts of ideas are flowing through my mind. I can make the robot talk, walk around the house, maybe clean up toys, and put away the groceries.
Of course all that will take a lot of time and development, which I do not have too much of while working full time and raising a kid. So I plan to make this a long series, with great mini stories as I go along.

Planning

Building a humanoid robot from scratch requires a ton of up-front planning and research. Some questions I ask myself are:

  • How big is this robot?
  • What kind of parts do I need?
  • How much are the parts?
  • How are the joints connected?
  • How am I going to program it?
  • What features does it need to function?
  • Are there pre-made kits or instructions?
  • What are the current techniques for training the robot?
  • How do I make the robot walk?

First I looked for existing works with guides to jump-start my project. I found one project with some details of how it was built. That build had 17 servos controlling multiple joints; each servo contributes to what is referred to as a degree of freedom (DOF). The robot also had two other key components, a gyroscope/accelerometer and a servo driver. The gyroscope and accelerometer give the robot a sense of balance and force. The servo driver allows multiple servos to be controlled from a single interface by something like a Raspberry Pi.
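
As an illustration of that single-interface idea, here is a minimal sketch that sweeps one servo through a 16-channel PCA9685 driver using the adafruit_servokit library. The specific driver board and library are my assumptions, since the project I found did not name them:

import time
from adafruit_servokit import ServoKit

# Assumption: a 16-channel PCA9685 servo driver wired to the Raspberry Pi's I2C pins
kit = ServoKit(channels=16)

# Sweep the servo on channel 0 (for example, a knee joint) through a few angles
for angle in (45, 90, 135, 90):
    kit.servo[0].angle = angle
    time.sleep(0.5)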

My next step, which I will talk about in the next post, will be the design of the robot: where the joints are going to be and what kind of frame I will use to put all the servos together.

Talking Thirsty Plant (June 14, 2019)

Do you or someone you know always water the plant after it starts to look very sad? Well this next project I am working on will give the plants an opportunity to speak up before it happens.

Thinking through the idea, I want the plant to be able to say something so the people in the house know the moisture level is getting low. The plant could say things like "Hello", "Yo", "I need water", or "Ahem".

Does this sound annoying? Well, how about adding a feature so it only goes off during certain times of the day? Better yet, let's add facial recognition so it only bugs the person responsible for watering the plant.

This idea is a great springtime activity to get the futuristic talking garden you always wanted.

I have uploaded the video to my YouTube page.

For this project you will need the following.

  • Raspberry Pi
  • Soil moisture sensor with an analog output
  • ADS1115 analog-to-digital converter board
  • USB audio adapter
  • Speaker

Optionally, for facial recognition you will need the following.

  • Jetson Nano
  • Camera for the Nano

STEP 1 Wiring Pi

Solder the pins to the ADS1115 board and connect it to the Raspberry Pi on the following pins.

VDD – (Rpi 3.3v)

GND – (Rpi Ground)

SCL – (Rpi pin 3)

SDA – (Rpi pin 2)

Next wire the moisture sensor

VCC – (Rpi 3.3v)

GND – (Rpi Ground)

Aout – (ADS1115 A0)

Once the wiring is complete go ahead and hook up the usb audio adapter and speaker.
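
Before going further, it is worth confirming the speaker and USB audio adapter actually work. A quick text-to-speech test like the sketch below does the trick; it uses the espeak command-line tool, which is my choice for this test and not necessarily what Plant.py uses:

import subprocess

def speak(message):
    # Uses the espeak command-line tool (install it with: sudo apt-get install espeak)
    subprocess.run(["espeak", message])

speak("Hello, I need water")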

STEP 2 Run the code

Power up the Raspberry Pi.

Go to my GitHub page and get the latest files for this project. Download the Plant.py file for the Raspberry Pi.

https://github.com/Botmation

You should now see current readings from the sensor. In the air you may see values above 1200. If you dip the sensor into water you will see values below 600.
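
The full logic lives in Plant.py, but the core idea is just a loop that reads channel A0 and compares it against a dryness threshold. A minimal sketch, assuming the legacy Adafruit_ADS1x15 Python library and the rough values mentioned above, looks like this:

import time
import Adafruit_ADS1x15

adc = Adafruit_ADS1x15.ADS1115()  # ADS1115 on its default I2C address
GAIN = 1
DRY_THRESHOLD = 1200  # roughly the "in air" reading mentioned above

while True:
    reading = adc.read_adc(0, gain=GAIN)  # moisture sensor on channel A0
    print("Moisture reading:", reading)
    if reading > DRY_THRESHOLD:
        print("The plant is thirsty!")  # Plant.py speaks here instead of printing
    time.sleep(5)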

The code also includes an MQTT function which listens for alerts from the Nano. This is optional and won't interfere if you do not have the Nano.

Facial Recognition with Nano

If you would like to use the facial recognition feature, download nano.py from the GitHub site.

For initial setup of the Nano please watch my video which walks you through the setup and training the program to recognize your face.

Once you train the program you can then run nano.py using the trained file as a reference.

You will need to edit the code and update the network IP address to point to your Raspberry Pi.
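
Conceptually, the hand-off from the Nano to the Pi is just an MQTT message when the right face is recognized. Below is a stripped-down sketch of the Nano side with the paho-mqtt library; the topic name and payload are placeholders (the real ones are in the GitHub code), and it assumes an MQTT broker such as mosquitto is reachable on the Pi:

import paho.mqtt.client as mqtt

RPI_IP = "192.168.1.50"  # replace with your Raspberry Pi's IP address
TOPIC = "plant/alert"    # placeholder topic name; match whatever Plant.py subscribes to

client = mqtt.Client()
client.connect(RPI_IP, 1883)
client.publish(TOPIC, "face_detected")  # tell the Pi the responsible person was seen
client.disconnect()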

OpenCV 4.1 Contrib Lib on Jetson Nano Quick Start Guide with Facial Recognition (June 1, 2019)

This is a quick guide to get you started on the Nano as quickly as possible, roughly a few hours in total. The guide highlights major steps only and is intended for intermediate users who just want the steps and links.

Video https://youtu.be/PttoKt6TMDk

GOAL: Running facial recognition in OpenCV 4.1 with the contrib lib, in Python, using the Nano and a camera.

Disclaimer: This will uninstall the OpenCV 3.3 that is preloaded on JetPack for the Nano and upgrade it to OpenCV 4.1. Worst case, it will break OpenCV and you will have to reload the image. If you do not need facial recognition or other libraries in the contrib build, then you do not need to upgrade to OpenCV 4.1.


What you will need.

Preparing the MicroSD card
You will need to prepare the SD card using software from this link: https://www.sdcard.org/downloads/formatter/ Download and install the software, then perform a quick format of the SD card with no name.

Install the Jetson image
Download the image from the Nvidia web site: https://developer.nvidia.com/embedded/dlc/jetson-nano-dev-kit-sd-card-image
Download the image writing software and install it on your computer: https://www.balena.io/etcher/
Use the Etcher software to load the image to the SD card. There is no need to unzip the Jetson image.

Connect the camera to the Nano
Now connect the Raspberry Pi camera to the Nano.

Starting up the Nano
Insert the SD card into the Nano. Set the jumper on the Nano to use the 5V barrel-jack power supply rather than micro-USB.

Connect a monitor, mouse, and keyboard. Connect the power supply to the Nano and power it on.
Create a user name and password.

Increase System Memory
In order to build OpenCV 4.1 on the Nano we need roughly 4 GB of additional swap memory; otherwise the build will crash. Run the code below, and ensure you are using at least a 32 GB SD card.

sudo fallocate -l 4.0G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
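
This swap file only lasts until the next reboot, which is fine for a one-time build. If you would rather keep it around, you can optionally add it to /etc/fstab:

echo '/swapfile swap swap defaults 0 0' | sudo tee -a /etc/fstab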

Uninstalling OpenCV

sudo apt-get purge libopencv*

Installing OpenCV 4.1
Open a Terminal window and browse to a folder where you want OpenCV to download and compile. Then run the following code.

sudo apt-get update
sudo apt-get install -y build-essential cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install -y libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev
sudo apt-get install -y python2.7-dev python3.6-dev python-dev python-numpy python3-numpy
sudo apt-get install -y libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev
sudo apt-get install -y libv4l-dev v4l-utils qv4l2 v4l2ucp
sudo apt-get install -y curl
sudo apt install -y python3-pip
sudo apt-get update

wget https://github.com/opencv/opencv/archive/4.1.0.zip -O opencv-4.1.0.zip
wget https://github.com/opencv/opencv_contrib/archive/4.1.0.zip -O opencv-contrib-4.1.0.zip
unzip opencv-4.1.0.zip
unzip opencv-contrib-4.1.0.zip
cd opencv-4.1.0/

mkdir release
cd release/
cmake -D WITH_CUDA=ON -D CUDA_ARCH_BIN="5.3" -D CUDA_ARCH_PTX="" -D  OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-4.1.0/modules -D  WITH_GSTREAMER=ON -D WITH_LIBV4L=ON -D BUILD_opencv_python2=ON -D  BUILD_opencv_python3=ON -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D  BUILD_EXAMPLES=OFF -D CMAKE_BUILD_TYPE=RELEASE -D  CMAKE_INSTALL_PREFIX=/usr/local ..
make -j3
sudo make install
sudo apt-get install -y python-opencv python3-opencv

sudo apt-get install -y libjpeg-dev 
pip3 install --user pillow

When the installation is complete, run the code below to check the OpenCV build you have.

python3

import cv2
print(cv2.getBuildInformation())
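
Since the whole reason for rebuilding is the contrib modules, it is also worth checking that the face module made it in (still inside the same Python session):

print(cv2.__version__)
print(hasattr(cv2, "face"))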

Generate Test images
Go to the location where you want to store the following example code.
Create a new folder called “dataset”, but do not browse into it.
All three Python scripts for generation, training, and recognition are to be saved in the same directory.
This code will take images of your face using the Raspberry Pi camera. The results will be stored in the “dataset” folder, and the next script will train on them.
Run the following code.

import cv2

# Open the Raspberry Pi camera through a GStreamer pipeline
cam = cv2.VideoCapture('nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1280, height=720, framerate=21/1, format=NV12 ! nvvidconv flip-method=2 ! video/x-raw, width=960, height=616, format=BGRx ! videoconvert ! appsink', cv2.CAP_GSTREAMER)
face_detector = cv2.CascadeClassifier('/usr/local/share/opencv4/haarcascades/haarcascade_frontalface_default.xml')

# For each person, enter one numeric face id
face_id = input('\n enter user id and press <return> ==>  ')
print("\n [INFO] Initializing face capture. Look at the camera and wait ...")

# Initialize individual sampling face count
count = 0
while True:
    ret, img = cam.read()
    img = cv2.flip(img, 1)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
        count += 1
        # Save the captured image into the dataset folder
        cv2.imwrite("dataset/User." + str(face_id) + '.' + str(count) + ".jpg", gray[y:y + h, x:x + w])
        cv2.imshow('image', img)
    k = cv2.waitKey(100) & 0xff  # Press 'ESC' to exit the video
    if k == 27:
        break
    elif count >= 30:  # Take 30 face samples and stop the video
        break

# Do a bit of cleanup
print("\n [INFO] Exiting Program and cleanup stuff")
cam.release()
cv2.destroyAllWindows()

Train Test images
This script will train the machine learning program to recognize your face.
Create a new folder called “trainer”, but do not browse into it.
Run the code below.

import cv2
import numpy as np
from PIL import Image
import os

# Path for face image database
path = 'dataset'
recognizer = cv2.face.LBPHFaceRecognizer_create()
detector = cv2.CascadeClassifier("/usr/local/share/opencv4/haarcascades/haarcascade_frontalface_default.xml")

# Function to get the images and label data
def getImagesAndLabels(path):
    imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
    faceSamples = []
    ids = []
    for imagePath in imagePaths:
        PIL_img = Image.open(imagePath).convert('L')  # convert it to grayscale
        img_numpy = np.array(PIL_img, 'uint8')
        id = int(os.path.split(imagePath)[-1].split(".")[1])
        faces = detector.detectMultiScale(img_numpy)
        for (x, y, w, h) in faces:
            faceSamples.append(img_numpy[y:y + h, x:x + w])
            ids.append(id)
    return faceSamples, ids

print("\n [INFO] Training faces. It will take a few seconds. Wait ...")
faces, ids = getImagesAndLabels(path)
recognizer.train(faces, np.array(ids))

# Save the model into trainer/trainer.yml
recognizer.write('trainer/trainer.yml')  # recognizer.save() worked on Mac, but not on Pi

# Print the number of faces trained and end the program
print("\n [INFO] {0} faces trained. Exiting Program".format(len(np.unique(ids))))

Running Facial Recognition
Now comes the good part. Your final file directory should look like this.

Run the following script in the base directory. The program should now recognize your face.

import time
import cv2
import numpy as np
import os

print("OpenCV " + cv2.__version__)
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read('trainer/trainer.yml')
cascadePath = "/usr/local/share/opencv4/haarcascades/haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascadePath)
font = cv2.FONT_HERSHEY_SIMPLEX

# names related to ids: example ==> yourname: id=1, etc
names = ['None', 'Botmation']

# Open the Raspberry Pi camera through a GStreamer pipeline
cam = cv2.VideoCapture('nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1280, height=720, framerate=21/1, format=NV12 ! nvvidconv flip-method=2 ! video/x-raw, width=960, height=616, format=BGRx ! videoconvert ! appsink', cv2.CAP_GSTREAMER)

# Define min window size to be recognized as a face
minW = 0.1 * cam.get(3)
minH = 0.1 * cam.get(4)

ret, img = cam.read()
while ret:
    ret, img = cam.read()
    if not ret:
        break
    img = cv2.flip(img, 1)  # Mirror the image horizontally
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.2,
        minNeighbors=5,
        minSize=(int(minW), int(minH)),
    )
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
        id, confidence = recognizer.predict(gray[y:y + h, x:x + w])
        # Lower confidence means a closer match; "0" is a perfect match
        if confidence < 40:
            id = names[id]
        else:
            id = "unknown"
        confidence = "  {0}%".format(round(100 - confidence))
        cv2.putText(img, str(id), (x + 5, y - 5), font, 1, (255, 255, 255), 2)
        cv2.putText(img, str(confidence), (x + 5, y + h - 5), font, 1, (255, 255, 0), 1)
    cv2.imshow('camera', img)

    k = cv2.waitKey(10) & 0xff  # Press 'ESC' to exit the video
    if k == 27:
        break

# Do a bit of cleanup
print("\n [INFO] Exiting Program and cleanup stuff")
cam.release()
cv2.destroyAllWindows()

Additional resources and References
Facial Recognition on OpenCV
https://www.hackster.io/mjrobot/real-time-face-recognition-an-end-to-end-project-a10826

Jetson Nano quick start guide
https://developer.nvidia.com/embedded/learn/get-started-jetson-nano-devkit
