# Camera Access and Image Processing (6P.)
Before you start with the first task, make sure that the Pi Camera is correctly connected to your Raspberry Pi and that you are connected via SSH. Navigate into the folder
WS2, where all necessary scripts for this assignment are located, and run reset.py.
## a)
Use the script test_camera.py to check if your setup works. If this is the case,
an image should pop up, which is additionally stored as image.jpg on your Pi.
Copy that image into the box below.
Image: 
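The provided test_camera.py is not reproduced here; the following is only a minimal sketch of what such a capture-and-display script could look like (the stored file name image.jpg is taken from the task description, everything else is an assumption):

```python
# Sketch of a capture-and-display script; the actual test_camera.py in WS2 may differ.
import time
import cv2
import picamera

with picamera.PiCamera(resolution='640x480') as camera:
    time.sleep(2)                      # let the sensor adjust exposure and gain
    camera.capture('image.jpg')        # store the image next to the script

img = cv2.imread('image.jpg')
cv2.imshow('Captured image', img)      # pops up the image (needs an X session / X forwarding)
cv2.waitKey(0)
cv2.destroyAllWindows()
```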
## b)
Provide two sample images in the box below and think about which color-space
conversion might be the most appropriate to reduce complexity when you want
to further process them.
COLOR_BGR2GRAY: useful when only brightness matters; it reduces the image from 3 (or 4) channels to a single channel, which is exactly what the Haar face detection below works on.

COLOR_BGR2HSV: separates hue from saturation and brightness, which makes color-based filtering much easier than in BGR (see https://stackoverflow.com/a/17063317).
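A short sketch of the two conversions mentioned above (the input file name is just an example):

```python
import cv2

bgr = cv2.imread('sample.jpg')                 # OpenCV loads images as BGR
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)   # 1 channel: brightness only
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)     # hue / saturation / value

print(bgr.shape, gray.shape, hsv.shape)        # e.g. (480, 640, 3) (480, 640) (480, 640, 3)
```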

## c)
A conversion into HSV space is especially useful for color filtering. Try to
extract a colored object with the script **extract_color.py**. By default the script
uses an image of a street, where you can e.g. extract the yellow lane, which
might be helpful information for a self-driving car, but you are free to use
any arbitrary image. Provide the source image, the filtered image and the used
parameter settings.
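The script itself is not shown here; the following is a minimal sketch of the HSV masking it presumably performs. The yellow-lane bounds are example values only; the actual parameter settings to report are whatever you end up using.

```python
import cv2
import numpy as np

img = cv2.imread('street.jpg')                         # example source image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Example thresholds for a yellow lane; tune lower/upper for your object.
lower = np.array([20, 100, 100], dtype=np.uint8)
upper = np.array([35, 255, 255], dtype=np.uint8)

mask = cv2.inRange(hsv, lower, upper)                  # binary mask of matching pixels
filtered = cv2.bitwise_and(img, img, mask=mask)        # keep only the masked pixels

cv2.imwrite('filtered.jpg', filtered)
```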

# Face Detection
## a)
Try to run the code and find out how long one
execution takes on average (in ms).
```python
import io
import cv2
import numpy
import picamera
import logging
import SocketServer
import time
from threading import Condition
import BaseHTTPServer
PAGE="""\
<html>
<head>
<title>Embedded & Pervasive Systems - Assignment 2 - Face Detection</title>
</head>
<body>
<h1>Embedded & Pervasive Systems - Face Detection</h1>
<img src="stream.mjpg" width="640" height="480" />
</body>
</html>
"""
class StreamingOutput(object):
    def __init__(self):
        self.frame = None
        self.buffer = io.BytesIO()
        self.condition = Condition()
        cascPath = "haarcascade_frontalface_default.xml"
        self.faceCascade = cv2.CascadeClassifier(cascPath)

    def write(self, buf):
        if buf.startswith(b'\xff\xd8'):
            # New frame, copy the existing buffer's content and notify all
            # clients it's available
            self.buffer.truncate()
            with self.condition:
                self.frame = self.buffer.getvalue()
                ### Here the actual Face Detection takes place ###
                numpyBuf = numpy.fromstring(self.frame, dtype=numpy.uint8)
                if numpyBuf.size > 0:
                    start_t = time.time()
                    # Decode the JPEG frame and convert it to gray-scale
                    image = cv2.imdecode(numpyBuf, 1)
                    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
                    # detect faces
                    faces = self.faceCascade.detectMultiScale(
                        gray,
                        scaleFactor=1.1,
                        minNeighbors=5,
                        # smaller values allow smaller faces to be detected
                        minSize=(50, 50),
                        flags=cv2.cv.CV_HAAR_SCALE_IMAGE
                    )
                    end_t = time.time()
                    if len(faces):
                        print "Found " + str(len(faces)) + " face(s)"
                        print "Elapsed time {:.2f}".format(end_t - start_t)
                    # Draw rectangles around faces and encode image for the web stream
                    for (x, y, w, h) in faces:
                        cv2.rectangle(image, (x, y), (x + w, y + h), (116, 244, 66), 2)
                    self.frame = cv2.imencode('.jpg', image)[1].tostring()
                ###
                self.condition.notify_all()
            self.buffer.seek(0)
        return self.buffer.write(buf)
class StreamingHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/':
            self.send_response(301)
            self.send_header('Location', '/index.html')
            self.end_headers()
        elif self.path == '/index.html':
            content = PAGE.encode('utf-8')
            self.send_response(200)
            self.send_header('Content-Type', 'text/html')
            self.send_header('Content-Length', len(content))
            self.end_headers()
            self.wfile.write(content)
        elif self.path == '/stream.mjpg':
            self.send_response(200)
            self.send_header('Age', 0)
            self.send_header('Cache-Control', 'no-cache, private')
            self.send_header('Pragma', 'no-cache')
            self.send_header('Content-Type', 'multipart/x-mixed-replace; boundary=FRAME')
            self.end_headers()
            try:
                while True:
                    with output.condition:
                        output.condition.wait()
                        frame = output.frame
                    self.wfile.write(b'--FRAME\r\n')
                    self.send_header('Content-Type', 'image/jpeg')
                    self.send_header('Content-Length', len(frame))
                    self.end_headers()
                    self.wfile.write(frame)
                    self.wfile.write(b'\r\n')
            except Exception as e:
                logging.warning(
                    'Removed streaming client %s: %s',
                    self.client_address, str(e))
        else:
            self.send_error(404)
            self.end_headers()
class StreamingServer(SocketServer.ThreadingMixIn, BaseHTTPServer.HTTPServer):
    allow_reuse_address = True
    daemon_threads = True

with picamera.PiCamera(resolution='250x150', framerate=25) as camera:
    output = StreamingOutput()
    camera.start_recording(output, format='mjpeg')
    try:
        address = ('', 8000)
        server = StreamingServer(address, StreamingHandler)
        server.serve_forever()
    finally:
        camera.stop_recording()
```
```
Found 1 face(s)
Elapsed time 0.71
Found 1 face(s)
Elapsed time 0.71
Found 1 face(s)
Elapsed time 0.69
Found 1 face(s)
Elapsed time 0.68
Found 1 face(s)
Elapsed time 0.68
Found 1 face(s)
Elapsed time 0.69
Found 1 face(s)
Elapsed time 0.70
Found 1 face(s)
Elapsed time 0.72
Found 1 face(s)
Elapsed time 0.70
Found 1 face(s)
Elapsed time 0.72
Found 1 face(s)
Elapsed time 0.72
Found 1 face(s)
Elapsed time 0.71
Found 1 face(s)
Elapsed time 0.71
Found 1 face(s)
Elapsed time 0.70
Found 1 face(s)
Elapsed time 0.71
Found 1 face(s)
Elapsed time 0.72
Found 1 face(s)
Elapsed time 0.71
Found 1 face(s)
Elapsed time 0.72
Found 1 face(s)
Elapsed time 0.72
Found 1 face(s)
Elapsed time 0.72
Found 1 face(s)
Elapsed time 0.72
Found 2 face(s)
Elapsed time 0.71
Found 2 face(s)
Elapsed time 0.77
Found 2 face(s)
Elapsed time 0.72
Found 2 face(s)
Elapsed time 0.73
Found 1 face(s)
Elapsed time 0.73
Found 1 face(s)
Elapsed time 0.73
Found 2 face(s)
Elapsed time 0.73
Found 2 face(s)
Elapsed time 0.73
Found 1 face(s)
Elapsed time 0.73
Found 1 face(s)
Elapsed time 0.74
Found 1 face(s)
Elapsed time 0.75
Found 1 face(s)
Elapsed time 0.74
Found 1 face(s)
Elapsed time 0.73
Found 1 face(s)
Elapsed time 0.74
Found 1 face(s)
Elapsed time 0.74
Found 1 face(s)
Elapsed time 0.77
Found 2 face(s)
Elapsed time 0.76
Found 1 face(s)
Elapsed time 0.69
Found 1 face(s)
Elapsed time 0.70
Found 1 face(s)
Elapsed time 0.74
Found 1 face(s)
Elapsed time 0.74
Found 1 face(s)
Elapsed time 0.74
Found 1 face(s)
Elapsed time 0.75
Found 1 face(s)
Elapsed time 0.77
^CFound 1 face(s)
Elapsed time 0.75
```
From the log above we can see that the average processing time per frame is roughly 720 ms, far from the 40 ms per frame that 25 fps would require.
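The average quoted above can be reproduced directly from the log, e.g. with a small helper (assuming the console output was redirected into a file; the file name is hypothetical):

```python
# Compute the mean of the "Elapsed time" values from the captured output.
times = []
with open('face_detection.log') as f:      # hypothetical file containing the log above
    for line in f:
        if line.startswith('Elapsed time'):
            times.append(float(line.split()[-1]))

print('mean: {:.0f} ms over {} frames'.format(1000 * sum(times) / len(times), len(times)))
```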
## b)
Inspect the code and think about measures you can take to enable real-time
capability of the face detection. Maybe some of the preprocessing steps from
section 1 can help or are already implemented? Short textual ideas are enough.
Possible measures (a combined sketch is given after this list):
- Convert to grayscale before detection (this is already implemented)
- Apply an HSV skin-color filter tuned to the faces that should be detected, so the classifier only has to scan candidate regions
- Decrease the image resolution
- Reduce the frame rate
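A minimal sketch of how two of these ideas could be combined; the skin-tone HSV bounds and the 0.5 scale factor are rough assumptions that would need tuning:

```python
import cv2
import numpy as np

def detect_fast(image, cascade):
    # 1) work on a smaller copy of the frame; detection time shrinks roughly
    #    quadratically with the scale factor
    small = cv2.resize(image, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)

    # 2) optional skin-tone mask in HSV so the classifier mostly sees candidate regions
    hsv = cv2.cvtColor(small, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, np.array([0, 30, 60], np.uint8),
                            np.array([25, 180, 255], np.uint8))
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    gray = cv2.bitwise_and(gray, gray, mask=skin)

    faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5,
                                     minSize=(30, 30))
    # scale the detections back to the original image coordinates
    return [(2 * x, 2 * y, 2 * w, 2 * h) for (x, y, w, h) in faces]
```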






## c)
Now try to reduce the frame rate of the camera and the image resolution step
by step. How does this affect the execution time? Is there any noticeable
worsening of the detection accuracy because of that? Finally, write down the
parameter settings you use for ”real-time” face detection on your Pi and provide
one exemplary image with your detected face.
Lowering the frame rate alone does not reduce the per-frame processing time; lowering the resolution does. With a reduced resolution, a frame rate of 20-30 fps runs almost in real time.
250x150 at 25 fps is a good compromise between (more or less) real-time processing and acceptable detection accuracy; a benchmark sketch comparing resolutions follows below.
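A rough sketch of how the effect of the resolution on execution time could be measured offline on a single stored frame (file name and resolution list are just examples):

```python
import time
import cv2

cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
frame = cv2.imread('image.jpg')                      # any stored frame with a face in it

for width, height in [(640, 480), (320, 240), (250, 150)]:
    small = cv2.resize(frame, (width, height))
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    start = time.time()
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print('%dx%d: %.0f ms, %d face(s)' % (width, height,
                                          1000 * (time.time() - start), len(faces)))
```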









