In this article, we will learn how to install Face Recognition in Python on Windows. Face Recognition lets you recognize and manipulate faces from Python or from the command line with the world's simplest face recognition library, built using dlib's state-of-the-art deep-learning face recognition.
Installing Face Recognition on Windows:
Prerequisites:
The Face Recognition module can only be installed this way for Python versions 3.7 and 3.8.
Step 1: Install git for Windows
Step 2: Clone this repository and go inside the folder using the following commands
git clone https://github.com/RvTechiNNovate/face_recog_dlib_file.git
cd face_recog_dlib_file
Step 3: Enter the following command to install dlib and cmake using pip
Python 3.7: pip install dlib-19.19.0-cp37-cp37m-win_amd64.whl
Python 3.8: pip install dlib-19.19.0-cp38-cp38-win_amd64.whl
pip install cmake
Method 1: Using pip to install Face Recognition Package
Follow the below steps to install the Face Recognition package on Windows using pip:
Step 1: Install the latest Python3 in Windows
Step 2: Check if pip and python are correctly installed.
python --version
pip --version
Step 3: Upgrade your pip to avoid errors during installation.
pip install --upgrade pip
Step 4: Enter the following command to install Face Recognition using pip3.
pip install face-recognition
Method 2: Using setup.py to install Face Recognition
Follow the below steps to install the Face Recognition on Windows using the setup.py file:
Step 1: Download the latest source package of Face Recognition for python3 from here.
curl https://files.pythonhosted.org/packages/6c/49/75dda409b94841f01cbbc34114c9b67ec618265084e4d12d37ab838f4fd3/face_recognition-1.3.0.tar.gz > face_recognition-1.3.0.tar.gz
Step 2: Extract the downloaded package using the following command.
tar -xzvf face_recognition-1.3.0.tar.gz
Step 3: Go inside the folder and Enter the following command to install the package.
cd face_recognition-1.3.0
python setup.py install
Verifying Face Recognition installation on Windows:
Make the following import in your python terminal to verify if the installation has been done properly:
import face_recognition
If there is any error while importing the module, then it is not installed properly.
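As an extra check (a minimal sketch; recent releases of the package expose a __version__ attribute, and the value shown is only an example), you can also print the installed version:
import face_recognition
print(face_recognition.__version__)  # should print something like '1.3.0'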
Common Issues
Issue: Illegal instruction (core dumped) when using face_recognition or running examples.
Solution: dlib is compiled with SSE4 or AVX support, but your CPU is too old and doesn't support that.
Issue: RuntimeError: Unsupported image type, must be 8bit gray or RGB image. when running the webcam examples.
Solution: Your webcam probably isn't set up correctly with OpenCV. Look here for more.
Issue: MemoryError when running pip2 install face_recognition
Solution: The face_recognition_models file is too big for your available pip cache memory. Instead, try pip2 --no-cache-dir install face_recognition to avoid the issue.
Issue: AttributeError: 'module' object has no attribute 'face_recognition_model_v1'
Solution: The version of dlib you have installed is too old. You need version 19.7 or newer. Upgrade dlib.
Issue: AttributeError: 'module' object has no attribute 'cnn_face_detection_model_v1'
Solution: The version of dlib you have installed is too old. You need version 19.7 or newer. Upgrade dlib.
Issue: TypeError: imread() got an unexpected keyword argument 'mode'
Solution: The version of scipy you have installed is too old. You need version 0.17 or newer. Upgrade scipy.
History
1.2.3 (2018-08-21)
- You can now pass model="small" to face_landmarks() to use the 5-point face model instead of the 68-point model.
- Now officially supporting Python 3.7
- New example of using this library in a Jupyter Notebook
1.2.2 (2018-04-02)
- Added the face_detection CLI command
- Removed dependencies on scipy to make installation easier
- Cleaned up KNN example and fixed a bug with drawing fonts to label detected faces in the demo
1.2.1 (2018-02-01)
- Fixed version numbering inside of module code.
1.2.0 (2018-02-01)
- Fixed a bug where batch size parameter didn't work correctly when doing batch face detections on GPU.
- Updated OpenCV examples to do proper BGR -> RGB conversion
- Updated webcam examples to avoid common mistakes and reduce support questions
- Added a KNN classification example
- Added an example of automatically blurring faces in images or videos
- Updated Dockerfile example to use dlib v19.9 which removes the boost dependency.
1.1.0 (2017-09-23)
- Will use dlib's 5-point face pose estimator when possible for speed (instead of the 68-point face pose estimator)
- dlib v19.7 is now the minimum required version
- face_recognition_models v0.3.0 is now the minimum required version
1.0.0 (2017-08-29)
- Added support for dlib's CNN face detection model via the model="cnn" parameter on the face detection call
- Added support for GPU batched face detections using dlib's CNN face detector model
- Added find_faces_in_picture_cnn.py to examples
- Added find_faces_in_batches.py to examples
- Added face_rec_from_video_file.py to examples
- dlib v19.5 is now the minimum required version
- face_recognition_models v0.2.0 is now the minimum required version
0.2.2 (2017-07-07)
- Added --show-distance to cli
- Fixed a bug where --tolerance was ignored in cli if testing a single image
- Added benchmark.py to examples
0.2.1 (2017-07-03)
- Added --tolerance to cli
0.2.0 (2017-06-03)
- The CLI can now take advantage of multiple CPUs. Just pass in the --cpus X parameter where X is the number of CPUs to use.
- Added face_distance.py example
- Improved CLI tests to actually test the CLI functionality
- Updated facerec_on_raspberry_pi.py to capture in rgb (not bgr) format.
0.1.14 (2017-04-22)
- Fixed a ValueError crash when using the CLI on Python 2.7
0.1.13 (2017-04-20)
- Raspberry Pi support.
0.1.12 (2017-04-13)
- Fixed: Face landmarks wasn't returning all chin points.
0.1.11 (2017-03-30)
- Fixed a minor bug in the command-line interface.
0.1.10 (2017-03-21)
- Minor perf improvements with face comparisons.
- Test updates.
0.1.9 (2017-03-16)
- Fix minimum scipy version required.
0.1.8 (2017-03-16)
- Fix missing Pillow dependency.
0.1.7 (2017-03-13)
- First working release.
Face Recognition
You can also read a translated version of this file in Chinese 简体中文版 or in Korean 한국어 or in Japanese 日本語.
Recognize and manipulate faces from Python or from the command line with
the world’s simplest face recognition library.
Built using dlib’s state-of-the-art face recognition
built with deep learning. The model has an accuracy of 99.38% on the
Labeled Faces in the Wild benchmark.
This also provides a simple face_recognition
command line tool that lets
you do face recognition on a folder of images from the command line!
Features
Find faces in pictures
Find all the faces that appear in a picture:
import face_recognition
image = face_recognition.load_image_file("your_file.jpg")
face_locations = face_recognition.face_locations(image)
Find and manipulate facial features in pictures
Get the locations and outlines of each person’s eyes, nose, mouth and chin.
import face_recognition
image = face_recognition.load_image_file("your_file.jpg")
face_landmarks_list = face_recognition.face_landmarks(image)
Finding facial features is super useful for lots of important stuff. But you can also use it for really stupid stuff
like applying digital make-up (think ‘Meitu’):
Identify faces in pictures
Recognize who appears in each photo.
import face_recognition
known_image = face_recognition.load_image_file("biden.jpg")
unknown_image = face_recognition.load_image_file("unknown.jpg")
biden_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]
results = face_recognition.compare_faces([biden_encoding], unknown_encoding)
You can even use this library with other Python libraries to do real-time face recognition:
See this example for the code.
Online Demos
User-contributed shared Jupyter notebook demo (not officially supported):
Installation
Requirements
- Python 3.3+ or Python 2.7
- macOS or Linux (Windows not officially supported, but might work)
Installation Options:
Installing on Mac or Linux
First, make sure you have dlib already installed with Python bindings:
- How to install dlib from source on macOS or Ubuntu
Then, make sure you have cmake installed:
brew install cmake
Finally, install this module from pypi using pip3
(or pip2
for Python 2):
pip3 install face_recognition
Alternatively, you can try this library with Docker, see this section.
If you are having trouble with installation, you can also try out a
pre-configured VM.
Installing on an Nvidia Jetson Nano board
- Jetson Nano installation instructions
- Please follow the instructions in the article carefully. There is currently a bug in the CUDA libraries on the Jetson Nano that will cause this library to fail silently if you don't follow the instructions in the article to comment out a line in dlib and recompile it.
Installing on Raspberry Pi 2+
- Raspberry Pi 2+ installation instructions
Installing on FreeBSD
pkg install graphics/py-face_recognition
Installing on Windows
While Windows isn’t officially supported, helpful users have posted instructions on how to install this library:
- @masoudr’s Windows 10 installation guide (dlib + face_recognition)
Installing a pre-configured Virtual Machine image
- Download the pre-configured VM image (for VMware Player or VirtualBox).
Usage
Command-Line Interface
When you install face_recognition, you get two simple command-line programs:
- face_recognition: Recognize faces in a photograph or folder full of photographs.
- face_detection: Find faces in a photograph or folder full of photographs.
face_recognition command line tool
The face_recognition command lets you recognize faces in a photograph or
folder full of photographs.
First, you need to provide a folder with one picture of each person you
already know. There should be one image file for each person with the
files named according to who is in the picture:
Next, you need a second folder with the files you want to identify:
Then you simply run the command face_recognition, passing in
the folder of known people and the folder (or single image) with unknown
people, and it tells you who is in each image:
$ face_recognition ./pictures_of_people_i_know/ ./unknown_pictures/
/unknown_pictures/unknown.jpg,Barack Obama
/face_recognition_test/unknown_pictures/unknown.jpg,unknown_person
There’s one line in the output for each face. The data is comma-separated
with the filename and the name of the person found.
An unknown_person is a face in the image that didn't match anyone in
your folder of known people.
face_detection command line tool
The face_detection command lets you find the location (pixel coordinates)
of any faces in an image.
Just run the command face_detection, passing in a folder of images
to check (or a single image):
$ face_detection ./folder_with_pictures/
examples/image1.jpg,65,215,169,112
examples/image2.jpg,62,394,211,244
examples/image2.jpg,95,941,244,792
It prints one line for each face that was detected. The coordinates
reported are the top, right, bottom and left coordinates of the face (in pixels).
Adjusting Tolerance / Sensitivity
If you are getting multiple matches for the same person, it might be that
the people in your photos look very similar and a lower tolerance value
is needed to make face comparisons more strict.
You can do that with the --tolerance parameter. The default tolerance
value is 0.6, and lower numbers make face comparisons more strict:
$ face_recognition --tolerance 0.54 ./pictures_of_people_i_know/ ./unknown_pictures/
/unknown_pictures/unknown.jpg,Barack Obama
/face_recognition_test/unknown_pictures/unknown.jpg,unknown_person
If you want to see the face distance calculated for each match in order
to adjust the tolerance setting, you can use --show-distance true:
$ face_recognition --show-distance true ./pictures_of_people_i_know/ ./unknown_pictures/
/unknown_pictures/unknown.jpg,Barack Obama,0.378542298956785
/face_recognition_test/unknown_pictures/unknown.jpg,unknown_person,None
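The same tolerance idea applies when calling the Python API directly. Below is a minimal sketch (the image file names are placeholders): compare_faces accepts a tolerance argument, and face_distance returns the raw numeric distances so you can tune the threshold yourself:
import face_recognition

known_image = face_recognition.load_image_file("obama.jpg")      # placeholder file name
unknown_image = face_recognition.load_image_file("unknown.jpg")  # placeholder file name

known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# Stricter than the default tolerance of 0.6
matches = face_recognition.compare_faces([known_encoding], unknown_encoding, tolerance=0.54)

# Raw distances (lower means more similar), useful when picking a tolerance value
distances = face_recognition.face_distance([known_encoding], unknown_encoding)
print(matches, distances)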
More Examples
If you simply want to know the names of the people in each photograph but don’t
care about file names, you could do this:
$ face_recognition ./pictures_of_people_i_know/ ./unknown_pictures/ | cut -d ',' -f2
Barack Obama
unknown_person
Speeding up Face Recognition
Face recognition can be done in parallel if you have a computer with
multiple CPU cores. For example, if your system has 4 CPU cores, you can
process about 4 times as many images in the same amount of time by using
all your CPU cores in parallel.
If you are using Python 3.4 or newer, pass in a --cpus <number_of_cpu_cores_to_use> parameter:
$ face_recognition --cpus 4 ./pictures_of_people_i_know/ ./unknown_pictures/
You can also pass in --cpus -1 to use all CPU cores in your system.
Python Module
You can import the face_recognition
module and then easily manipulate
faces with just a couple of lines of code. It’s super easy!
API Docs: https://face-recognition.readthedocs.io.
Automatically find all the faces in an image
import face_recognition
image = face_recognition.load_image_file("my_picture.jpg")
face_locations = face_recognition.face_locations(image)
# face_locations is now an array listing the co-ordinates of each face!
See this example
to try it out.
You can also opt-in to a somewhat more accurate deep-learning-based face detection model.
Note: GPU acceleration (via NVidia’s CUDA library) is required for good
performance with this model. You’ll also want to enable CUDA support
when compiling dlib.
import face_recognition
image = face_recognition.load_image_file("my_picture.jpg")
face_locations = face_recognition.face_locations(image, model="cnn")
# face_locations is now an array listing the co-ordinates of each face!
See this example
to try it out.
If you have a lot of images and a GPU, you can also
find faces in batches.
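A rough sketch of the batched call (assuming the library's batch_face_locations helper, which runs the CNN detector on the GPU; the file names are placeholders):
import face_recognition

# Load several images (or video frames) into a list
images = [face_recognition.load_image_file(name) for name in ["img1.jpg", "img2.jpg", "img3.jpg"]]

# Detect faces for all of them in GPU batches
batched_locations = face_recognition.batch_face_locations(images, number_of_times_to_upsample=0, batch_size=32)

for image_locations in batched_locations:
    print(len(image_locations), "face(s) found")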
Automatically locate the facial features of a person in an image
import face_recognition
image = face_recognition.load_image_file("my_picture.jpg")
face_landmarks_list = face_recognition.face_landmarks(image)
# face_landmarks_list is now an array with the locations of each facial feature in each face.
# face_landmarks_list[0]['left_eye'] would be the location and outline of the first person's left eye.
See this example
to try it out.
Recognize faces in images and identify who they are
import face_recognition

picture_of_me = face_recognition.load_image_file("me.jpg")
my_face_encoding = face_recognition.face_encodings(picture_of_me)[0]
# my_face_encoding now contains a universal 'encoding' of my facial features that can be compared to any other picture of a face!

unknown_picture = face_recognition.load_image_file("unknown.jpg")
unknown_face_encoding = face_recognition.face_encodings(unknown_picture)[0]

# Now we can see the two face encodings are of the same person with `compare_faces`!
results = face_recognition.compare_faces([my_face_encoding], unknown_face_encoding)

if results[0] == True:
    print("It's a picture of me!")
else:
    print("It's not a picture of me!")
See this example
to try it out.
Python Code Examples
All the examples are available here.
Face Detection
- Find faces in a photograph
- Find faces in a photograph (using deep learning)
- Find faces in batches of images w/ GPU (using deep learning)
- Blur all the faces in a live video using your webcam (Requires OpenCV to be installed)
Facial Features
- Identify specific facial features in a photograph
- Apply (horribly ugly) digital make-up
Facial Recognition
- Find and recognize unknown faces in a photograph based on photographs of known people
- Identify and draw boxes around each person in a photo
- Compare faces by numeric face distance instead of only True/False matches
- Recognize faces in live video using your webcam — Simple / Slower Version (Requires OpenCV to be installed)
- Recognize faces in live video using your webcam — Faster Version (Requires OpenCV to be installed)
- Recognize faces in a video file and write out new video file (Requires OpenCV to be installed)
- Recognize faces on a Raspberry Pi w/ camera
- Run a web service to recognize faces via HTTP (Requires Flask to be installed)
- Recognize faces with a K-nearest neighbors classifier
- Train multiple images per person then recognize faces using a SVM
Creating a Standalone Executable
If you want to create a standalone executable that can run without the need to install python
or face_recognition, you can use PyInstaller. However, it requires some custom configuration to work with this library. See this issue for how to do it.
Articles and Guides that cover face_recognition
- My article on how Face Recognition works: Modern Face Recognition with Deep Learning
- Covers the algorithms and how they generally work
- Face recognition with OpenCV, Python, and deep learning by Adrian Rosebrock
- Covers how to use face recognition in practice
- Raspberry Pi Face Recognition by Adrian Rosebrock
- Covers how to use this on a Raspberry Pi
- Face clustering with Python by Adrian Rosebrock
- Covers how to automatically cluster photos based on who appears in each photo using unsupervised learning
How Face Recognition Works
If you want to learn how face location and recognition work instead of
depending on a black box library, read my article.
Caveats
- The face recognition model is trained on adults and does not work very well on children. It tends to mix
up children quite easily using the default comparison threshold of 0.6.
- Accuracy may vary between ethnic groups. Please see this wiki page for more details.
Deployment to Cloud Hosts (Heroku, AWS, etc)
Since face_recognition depends on dlib, which is written in C++, it can be tricky to deploy an app
using it to a cloud hosting provider like Heroku or AWS.
To make things easier, there's an example Dockerfile in this repo that shows how to run an app built with
face_recognition in a Docker container. With that, you should be able to deploy
to any service that supports Docker images.
You can try the Docker image locally by running: docker-compose up --build
There are also several prebuilt Docker images.
Linux users with a GPU (drivers >= 384.81) and Nvidia-Docker installed can run the example on the GPU: Open the docker-compose.yml file and uncomment the dockerfile: Dockerfile.gpu and runtime: nvidia lines.
Having problems?
If you run into problems, please read the Common Errors section of the wiki before filing a github issue.
Thanks
- Many, many thanks to Davis King (@nulhom) for creating dlib and for providing the trained facial feature detection and face encoding models used in this library. For more information on the ResNet that powers the face encodings, check out his blog post.
- Thanks to everyone who works on all the awesome Python data science libraries like numpy, scipy, scikit-image, pillow, etc, etc that makes this kind of stuff so easy and fun in Python.
- Thanks to Cookiecutter and the audreyr/cookiecutter-pypackage project template for making Python project packaging way more tolerable.
Translator Elena Bornovolokova adapted Faizan Shaikh's article for Netology on how to build a face recognition model and in which areas it can be applied.
Introduction
In recent years, computer vision has gained popularity and become a field of its own. Developers are building new applications that are used all over the world.
What attracts me to this field is the open-source philosophy. Even the technology giants are willing to share new discoveries and innovations with everyone, so that technology does not remain a privilege of the rich.
One such technology is face recognition. Used correctly and ethically, it can be applied in many areas of life.
In this article, I will show you how to build an effective face recognition algorithm using open-source tools. Before we get to that, I want you to get ready and find some inspiration by watching this video:
Face recognition: potential areas of application
Here are a few potential applications of face recognition technology.
Face recognition in social media. Facebook replaced manual image tagging with automatically generated tag suggestions for every image uploaded to the platform. Facebook uses a simple face recognition algorithm to analyze the pixels in an image and compare it with the relevant users.
Face recognition in security. A simple example of using face recognition technology to protect personal data is unlocking a smartphone "with your face". The same technology can also be built into an access control system: a person looks into the camera, and it decides whether or not to let them in.
Face recognition for counting people. Face recognition can be used to count the number of people attending an event (for example, a conference or a concert). Instead of counting attendees manually, we install a camera that captures images of the attendees' faces and reports the total number of visitors. This helps automate the process and save time.
Setting up the system: hardware and software requirements
Let's look at how we can use face recognition technology with the open-source tools available to us.
I used the following tools, which I recommend to you:
- A webcam (Logitech C920) for building a real-time face recognition model, on a Lenovo E470 ThinkPad laptop (Core i5 7th Gen). You can also use your laptop's built-in camera, or a video camera with any suitable system, for real-time video analysis instead of the hardware I used.
- A GPU is preferable for faster video processing.
- We used the Ubuntu 18.04 operating system with all the necessary software.
Before we start building our face recognition model, let's go over these points in more detail.
Step 1: Setting up the hardware
Check that the camera is set up correctly. On Ubuntu this is easy: see whether the device is recognized by the operating system. To do so, follow these steps:
- Before connecting the webcam to the laptop, check all connected video devices by typing the following on the command line:
ls /dev/video*
This will list all video devices connected to the system.
- Connect the webcam and run the command again. If the webcam is connected correctly, a new device will appear in the command output.
- You can also use webcam software to check that the camera works correctly. On Ubuntu, you can use the "Cheese" application for this.
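In addition to the checks above, here is a minimal Python sketch (not from the original article; the device index 0 is an assumption and may differ on your system) to confirm that OpenCV can open the camera:
import cv2

cap = cv2.VideoCapture(0)  # 0 is usually the first camera; adjust the index if needed
print("Camera opened:", cap.isOpened())
cap.release()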
Step 2: Setting up the software
Step 2.1: Installing Python
The code in this article is written in Python (version 3.5). To install Python, I recommend using Anaconda, a popular Python distribution for data processing and analysis.
Step 2.2: Installing OpenCV
OpenCV is an open-source library designed for building computer vision applications. OpenCV is installed with pip:
pip3 install opencv-python
Step 2.3: Install the face_recognition API
We will use the face_recognition API, which is considered the simplest face recognition API for Python in the world. To install it, use:
pip install dlib
pip install face_recognition
Implementation
With the system set up, we move on to the implementation. To start, we will write the program, and then explain what we did.
Step-by-step guide
Create a file face_detector.py and then copy the code below into it:
# import libraries
import cv2
import face_recognition
# Get a reference to webcam
video_capture = cv2.VideoCapture("/dev/video1")
# Initialize variables
face_locations = []
while True:
    # Grab a single frame of video
    ret, frame = video_capture.read()
    # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
    rgb_frame = frame[:, :, ::-1]
    # Find all the faces in the current frame of video
    face_locations = face_recognition.face_locations(rgb_frame)
    # Display the results
    for top, right, bottom, left in face_locations:
        # Draw a box around the face
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
    # Display the resulting image
    cv2.imshow('Video', frame)
    # Hit 'q' on the keyboard to quit!
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release handle to the webcam
video_capture.release()
cv2.destroyAllWindows()
Then run this Python file by typing:
python face_detector.py
If everything works correctly, a new window will open running real-time face detection.
To recap, here is what our code did:
- First, we specified the hardware on which the video analysis will run.
- Next, we captured live video frame by frame.
- Then we processed each frame and extracted the locations of all faces in the image.
- Finally, we rendered those frames back as video, marking where the faces are located.
An example application of face recognition technology
The fun does not end there. We will do one more cool thing: build a complete example application based on the code above. A few small changes to the code and we will be done.
Suppose you want to build an automated system that uses a video camera to track where the speaker is at any given moment. Depending on the speaker's position, the system turns the camera so that the speaker always stays in the center of the frame.
The first step is to build a system that identifies the person or people in the video and focuses on the speaker's location.
Let's see how to do that. As an example, I chose a YouTube video of speakers presenting at the DataHack Summit 2017 conference.
First, import the required libraries:
import cv2
import face_recognition
Then read the video and get its length:
input_movie = cv2.VideoCapture("sample_video.mp4")
length = int(input_movie.get(cv2.CAP_PROP_FRAME_COUNT))
After that, we create an output file with the required resolution and a frame rate matching that of the input file.
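The article does not show the code for this step; here is a minimal sketch of how the output_movie writer used below could be created with OpenCV (the codec, frame rate and frame size are assumptions and should be taken from your input video):
# Create an output movie file (resolution and frame rate should match the input video)
fourcc = cv2.VideoWriter_fourcc(*'XVID')
output_movie = cv2.VideoWriter('output.avi', fourcc, 29.97, (640, 360))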
We load an image of the speaker as a sample for recognizing them in the video:
image = face_recognition.load_image_file("sample_image.jpeg")
face_encoding = face_recognition.face_encodings(image)[0]
known_faces = [
face_encoding,
]
Once that is done, we start a loop that will:
- Extract a frame from the video.
- Find all the faces and identify them.
- Create a new video that combines the original frame with the location of the speaker's face and a label.
Let's look at the code that does this:
# Initialize variables
face_locations = []
face_encodings = []
face_names = []
frame_number = 0
while True:
    # Grab a single frame of video
    ret, frame = input_movie.read()
    frame_number += 1
    # Quit when the input video file ends
    if not ret:
        break
    # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
    rgb_frame = frame[:, :, ::-1]
    # Find all the faces and face encodings in the current frame of video
    face_locations = face_recognition.face_locations(rgb_frame, model="cnn")
    face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)
    face_names = []
    for face_encoding in face_encodings:
        # See if the face is a match for the known face(s)
        match = face_recognition.compare_faces(known_faces, face_encoding, tolerance=0.50)
        name = None
        if match[0]:
            name = "Phani Srikant"
        face_names.append(name)
    # Label the results
    for (top, right, bottom, left), name in zip(face_locations, face_names):
        if not name:
            continue
        # Draw a box around the face
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
        # Draw a label with a name below the face
        cv2.rectangle(frame, (left, bottom - 25), (right, bottom), (0, 0, 255), cv2.FILLED)
        font = cv2.FONT_HERSHEY_DUPLEX
        cv2.putText(frame, name, (left + 6, bottom - 6), font, 0.5, (255, 255, 255), 1)
    # Write the resulting image to the output video file
    print("Writing frame {} / {}".format(frame_number, length))
    output_movie.write(frame)

# All done!
input_movie.release()
cv2.destroyAllWindows()
The code will give you a result like this:
In this tutorial, I explain the setup and usage of the Python face_recognition library. This library can be used to detect faces using Python and identify facial features.
- The Goal
- Installing The "face_recognition" Library
- Prerequisites (Windows)
- CMake
- Visual Studio C++ Build Tools
- Installing face_recognition and Verifying The Installation
- Installing PIL
- Detecting A Face In An Image
- Identifying The Detected Face
- Cropping Out The Detected Face
- Final Code For This Section
- Detecting Multiple Faces In An Image
- Identifying The Detected Faces
- Cropping Out The Detected Faces
- Final Code For This Section
- Identifying Facial Features
- Final Code For This Section
- Matching Detected Faces
- Getting Face Encodings
- Checking For Matches
- Final Code For This Section
- Additional PIL Help
The Goal
In this tutorial, I’ll go over some example usages of the Python face_recognition library to:
- Detect faces in images
- Detect facial features on a detected face (like eyebrows and nose)
- Check for matches of detected faces
All images and code snippets are provided on this post along with step-by-step instructions and explanations as to what is going on. This tutorial is aimed towards Windows 10 but Linux and macOS users will most likely find this easier as they can skip some of the prerequisites.
Here are some relevant links for face_recognition if you need/want them:
- Documentation: face-recognition.readthedocs.io
- Source code: github.com/ageitgey/face_recognition
- PyPI: pypi.org/project/face-recognition
Installing The "face_recognition" Library
Prerequisites (Windows)
To install the face_recognition library on Windows you will need the following installed:
- CMake
- Visual Studio C++ build tools
If you do not have these, you will get errors like,
CMake must be installed to build the following extensions: _dlib_pybind11
Which is telling you that CMake isn’t installed, or,
You must use Visual Studio to build a python extension on windows. If you
are getting this error it means you have not installed Visual C++. Note
that there are many flavours of Visual Studio, like Visual Studio for C#
development. You need to install Visual Studio for C++.
Which is telling you that you need Visual Studio C++ build tools.
CMake
To install CMake, go to cmake.org/download/ and download the appropriate installer for your machine. I am using 64-bit Windows 10 so I will get cmake-<version>-win64-x64.msi. After downloading the setup file, install it.
While installing CMake, add CMake to the system PATH environment variable for all users or the current user so it can be found easily.
After the installation is complete, open a terminal and execute cmake. This should show the usage for CMake. If it did not, make sure you selected the option to add it to the PATH environment variable.
You will need to close and re-open your terminal/application for the PATH variable to update so the cmake binary can be identified.
Visual Studio C++ Build Tools
Unlike Linux, C++ compilers for Windows are not included by default in the OS. If we visit the WindowsCompilers page on the Python wiki we can see there is information on getting a standalone version of Visual C++ 14.2 compiler without the need for Visual Studio. If we visit the link from that wiki section, we will be brought to a Microsoft download page. On this download page, you will want to download «Build Tools for Visual Studio 2019».
When vs_buildtools__<some other stuff>.exe has downloaded, run the exe and allow it to install a few things before we get to the screen below. When you get to this screen, make the selections I have:
After clicking «Install», wait for the installation to complete and restart.
Now that CMake and the required build tools are installed, we can then continue to installing the face_recognition library.
Installing face_recognition and Verifying The Installation
To install the face_recognition library, execute the following in a terminal:
python -m pip install face-recognition
This might take a bit longer to install than usual as dlib needs to be built using the tools installed from above. To validate that the library was installed successfully, try to import the library in Python using the following:
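import face_recognition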
No errors should be raised.
Installing PIL
For this tutorial, we will also be using Pillow / PIL which will help us crop and draw on images. This is not required to use the face_recognition library but will be required in this tutorial to prove/show results. To install it, execute the following:
python -m pip install Pillow
To validate the library was installed, try to import PIL in Python using the following:
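import PIL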
No errors should be raised.
Detecting A Face In An Image
Now that we have set up the face_recognition library and PIL, we can start detecting faces. To start off, we will detect a face in an image with just one person.
single-person.jpg
First, we want to import face_recognition and some helpers from PIL.
import face_recognition
from PIL import Image, ImageDraw
Now load your image using face_recognition.load_image_file.
image = face_recognition.load_image_file('single-person.jpg')
image now contains our image in a format that face_recognition can detect faces with. To identify the location of the face in this image, call face_recognition.face_locations and pass the image.
face_locations = face_recognition.face_locations(image)
face_locations will now contain a list of face locations. Each face location is a tuple of pixel positions for (top, right, bottom, left); we need to remember this for when we use it later.
Since there is only one face in this image, we would expect there to be only one item in this list. To check how many faces were detected, we can get the length of the list.
amount = len(face_locations)
print(f'There are {amount} face locations')
For the example image I have provided above and am using for this tutorial, this has told me there was one face detected.
To get the location of this face to use later, we can then just get the first element out of the list.
first_face_location = face_locations[0]
To see what’s in this, you can call:
print(first_face_location)
Which will print out something like:
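(164, 394, 310, 247)
(These numbers are only illustrative; the exact values will depend on your image.)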
These are the pixel positions for (top, right, bottom, left)
which we will use to create a box and crop with soon.
Identifying The Detected Face
To identify where the face is that was detected in the image, we will draw a red box on the bounds that were returned by face_recognition.face_locations.
First, we need to create a PIL image from the image that was loaded using face_recognition.load_image_file. Doing this will allow us to use features offered by PIL.
img = Image.fromarray(image, 'RGB')
Now that we have the PIL image, we need to create an object to help us draw on the image. Before we do this, we will also copy the image into a new object so that when we crop the face out later there won’t be a red box still around it.
img_with_red_box = img.copy()
img_with_red_box_draw = ImageDraw.Draw(img_with_red_box)
Now that we have an object to help us draw on the image, we will draw a rectangle using the dimensions returned earlier.
To draw a box, we need two points, the top left and the bottom right, as x and y coordinates. Since we got back (top, right, bottom, left), we need to make these (left, top), (right, bottom); the basic translation can be seen below.
img_with_red_box_draw.rectangle(
[
(first_face_location[3], first_face_location[0]),
(first_face_location[1], first_face_location[2])
],
outline="red",
width=3
)
We needed (left, top), (right, bottom) to get two (x, y) points.
In that step we have also set outline="red" to make the box red and width=3 to make the box 3 pixels wide.
To see the final result, we call:
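img_with_red_box.show()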
This will open the image in the default image viewer. The image should look like this:
Cropping Out The Detected Face
Aside from drawing a box, we can also crop out the face into another image.
Using the original image that we didn't draw on (because we drew on the copied image), we can call img.crop providing the dimensions from before.
img_cropped = img.crop((
first_face_location[3], # Left x
first_face_location[0], # Top y
first_face_location[1], # Right x
first_face_location[2] # Bottom y
))
img.crop returns a copy of the original image being cropped, so you do not need to copy it beforehand if you want to do something else with the original image.
img_cropped now contains a new cropped image; to display it, we can call .show() again.
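img_cropped.show()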
Final Code For This Section
import face_recognition
from PIL import Image, ImageDraw
# Detecting the faces
image = face_recognition.load_image_file('single-person.jpg') # Load the image
face_locations = face_recognition.face_locations(image) # Detect the face locations
first_face_location = face_locations[0] # Get the first face
# Convert the face_recognition image to a PIL image
img = Image.fromarray(image, 'RGB')
# Creating the image with red box
img_with_red_box = img.copy() # Create a copy of the original image so there is not red box in the cropped image later
img_with_red_box_draw = ImageDraw.Draw(img_with_red_box) # Create an image to draw with
img_with_red_box_draw.rectangle( # Draw the rectangle on the image
[
(first_face_location[3], first_face_location[0]), # (left, top)
(first_face_location[1], first_face_location[2]) # (right, bottom)
],
outline="red", # Make the box red
width=3 # Make the box 3px in thickness
)
img_with_red_box.show() # Open the image in the default image viewer
# Creating the cropped image
img_cropped = img.crop(( # Crop the original image
first_face_location[3],
first_face_location[0],
first_face_location[1],
first_face_location[2]
))
img_cropped.show() # Open the image in the default image viewer
Detecting Multiple Faces In An Image
We saw before that face_recognition.face_locations returns an array of tuples corresponding with the locations of faces. This means we can use the same methods as above, but loop over the result of face_recognition.face_locations when drawing and cropping.
I will use the following as my image. It has 5 faces visible, 2 of which are slightly blurred.
group-of-people.jpg
Once again, we want to import face_recognition and some helpers from PIL.
import face_recognition
from PIL import Image, ImageDraw
Then load the new image using face_recognition.load_image_file and detect the faces using the same methods as before.
image = face_recognition.load_image_file('group-of-people.jpg')
face_locations = face_recognition.face_locations(image)
If we print out face_locations (print(face_locations)), we can see that 5 faces have been detected.
[
(511, 1096, 666, 941),
(526, 368, 655, 239),
(283, 1262, 390, 1154),
(168, 1744, 297, 1615),
(271, 390, 378, 282)
]
Since we now have more than one face, taking the first one does not make sense — we should loop over them all and perform our operations in the loop.
Before we continue, we should also create the PIL image we will be working with.
img = Image.fromarray(image, 'RGB')
Identifying The Detected Faces
Like before, we need to copy the original image for later (optional) and create an object to help us draw.
img_with_red_box = img.copy()
img_with_red_box_draw = ImageDraw.Draw(img_with_red_box)
Now we can loop all the faces and create rectangles.
for face_location in face_locations:
    img_with_red_box_draw.rectangle(
        [
            (face_location[3], face_location[0]),
            (face_location[1], face_location[2])
        ],
        outline="red",
        width=3
    )
And once again, look at the image:
Not bad eh!
Cropping Out The Detected Faces
Just like drawing many boxes, we can also crop all the detected faces. Use a for-loop again, crop the image in each loop and then show the image.
for face_location in face_locations:
    img_cropped = img.crop((face_location[3], face_location[0], face_location[1], face_location[2]))
    img_cropped.show()
You will have many images open on your machine in separate windows; here they are all together:
I have downscaled these images for the tutorial but yours will be cropped at whatever resolution you put in.
Final Code For This Section
import face_recognition
from PIL import Image, ImageDraw
# Load image and detect faces
image = face_recognition.load_image_file("group-of-people.jpg")
face_locations = face_recognition.face_locations(image)
# Create the PIL image to copy and crop
img = Image.fromarray(image, 'RGB')
img_with_red_box = img.copy() # Make a single copy for all the red boxes
img_with_red_box_draw = ImageDraw.Draw(img_with_red_box) # Get our drawing object again
for face_location in face_locations:  # Loop over all the faces detected this time
    img_with_red_box_draw.rectangle(  # Draw a rectangle for the current face
        [
            (face_location[3], face_location[0]),
            (face_location[1], face_location[2])
        ],
        outline="red",
        width=3
    )
img_with_red_box.show() # Open the image in the default image viewer
for face_location in face_locations:  # Loop over all the faces detected
    img_cropped = img.crop((  # Crop the current image like we did last time
        face_location[3],
        face_location[0],
        face_location[1],
        face_location[2]
    ))
    img_cropped.show()  # Show the image for the current iteration
Identifying Facial Features
face_recognition also has a function face_recognition.face_landmarks which works like face_recognition.face_locations but will return a list of dictionaries containing face feature positions rather than the positions of the detected faces themselves.
Going back to the image with one person in it, we can import everything again, load the image and call face_recognition.face_landmarks.
import face_recognition
from PIL import Image, ImageDraw
image = face_recognition.load_image_file('single-person.jpg')
face_landmarks_list = face_recognition.face_landmarks(image) # The new call
Now if we print face_landmarks_list, the object will look a bit different.
[
{
'chin': [(315, 223), (318, 248), (321, 273), (326, 296), (335, 319), (350, 339), (370, 354), (392, 365), (415, 367), (436, 363), (455, 351), (469, 336), (479, 318), (486, 296), (488, 273), (490, 251), (489, 229)],
'left_eyebrow': [(329, 194), (341, 183), (358, 180), (375, 182), (391, 189)],
'right_eyebrow': [(434, 189), (448, 184), (461, 182), (474, 184), (483, 194)],
'nose_bridge': [(411, 209), (411, 223), (412, 238), (412, 253)],
'nose_tip': [(394, 269), (403, 272), (412, 275), (421, 272), (428, 269)],
'left_eye': [(349, 215), (360, 208), (373, 207), (384, 216), (372, 218), (359, 219)],
'right_eye': [(436, 216), (446, 208), (458, 208), (467, 216), (459, 219), (447, 219)],
'top_lip': [(374, 309), (388, 300), (402, 296), (411, 298), (420, 296), (434, 301), (448, 308), (442, 308), (420, 307), (411, 308), (402, 307), (380, 309)],
'bottom_lip': [(448, 308), (434, 317), (421, 321), (411, 322), (401, 321), (388, 317), (374, 309), (380, 309), (402, 309), (411, 310), (421, 309), (442, 308)]
}
]
There is quite a bit of stuff here. For each facial feature (i.e. chin, left eyebrow, right eyebrow, etc.), there is a corresponding list of (x, y) tuples; these are pixel coordinates on the image.
PIL offers a .line() method on the drawing object we have been using, which takes a list of (x, y) points; this is perfect for our situation. To start, we will need a drawing object.
img = Image.fromarray(image, 'RGB') # Make a PIL image from the loaded image
img_draw = ImageDraw.Draw(img) # Create the draw object
In this example we will not copy the image as this is the only time we will be using it
Now that we have the object to help us draw, we can plot all these lines using the first dictionary in the list returned above.
face_landmarks = face_landmarks_list[0] # Get the first dictionary of features
img_draw.line(face_landmarks['chin'])
img_draw.line(face_landmarks['left_eyebrow'])
img_draw.line(face_landmarks['right_eyebrow'])
img_draw.line(face_landmarks['nose_bridge'])
img_draw.line(face_landmarks['nose_tip'])
img_draw.line(face_landmarks['left_eye'])
img_draw.line(face_landmarks['right_eye'])
img_draw.line(face_landmarks['top_lip'])
img_draw.line(face_landmarks['bottom_lip'])
Now we can call .show() on our image to look at it:
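img.show()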
Final Code For This Section
import face_recognition
from PIL import Image, ImageDraw
# Load the image and detect face landmarks for each face within
image = face_recognition.load_image_file('single-person.jpg')
face_landmarks_list = face_recognition.face_landmarks(image)
# Make a PIL image from the loaded image and then get a drawing object
img = Image.fromarray(image, 'RGB')
img_draw = ImageDraw.Draw(img)
# Draw all the features for the first face
face_landmarks = face_landmarks_list[0] # Get the first object corresponding to the first face
img_draw.line(face_landmarks['chin'])
img_draw.line(face_landmarks['left_eyebrow'])
img_draw.line(face_landmarks['right_eyebrow'])
img_draw.line(face_landmarks['nose_bridge'])
img_draw.line(face_landmarks['nose_tip'])
img_draw.line(face_landmarks['left_eye'])
img_draw.line(face_landmarks['right_eye'])
img_draw.line(face_landmarks['top_lip'])
img_draw.line(face_landmarks['bottom_lip'])
img.show() # Show the image with the facial features drawn on
Matching Detected Faces
The face_recognition library also provides the function face_recognition.compare_faces which can be used to compare detected faces to see if they match.
There are two arguments for this function which we will use:
- known_face_encodings: A list of known face encodings
- face_encoding_to_check: A single face encoding to compare against the list
For this section, we will be getting 2 face encodings for the same person and checking if they are in another image.
Here are the two images with a known face in each:
elon-musk-1.jpg
elon-musk-2.png
And here are the images we will check to see if Elon is in them:
elon-musk-in-group.jpg
group-of-people.jpg
Getting Face Encodings
To get known face encodings, we can use face_recognition.face_encodings. This function takes an image that contains a face and the locations of the faces in that image to use.
Typically you would only use one location of a face in a single image to create one encoding but if you have multiple of the same face in one image you can provide more than one location.
The process of what we need to do to get our known face encodings is:
- Load in the first image
- Detect faces in the image to get the face locations
- Verify there is only one face and select the first face
- Call face_recognition.face_encodings with the image and the one face location
- Repeat steps 1 through 4 for the second image
We have done steps 1-3 previously, so we can do it here again:
import face_recognition
# Load image and detect faces
image = face_recognition.load_image_file("elon-musk-1.jpg")
face_locations = face_recognition.face_locations(image)
And to validate one face was detected:
print(len(face_locations)) # Should be 1
Now we can call face_recognition.face_encodings and provide the image and the found location.
face_location = face_locations[0] # We only want an encoding for the first face. There may be more than one face in images you use so I am leaving this here as a note.
face_encodings = face_recognition.face_encodings(image, [face_location])
The known_face_locations parameter is optional; if it is not supplied, face_recognition will automatically detect all the faces in the image when computing the encodings. For this part, I am passing the location explicitly to validate that there is only one detected face in the image.
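For reference, the two call forms look like this (using the image and face_location variables from above):
face_recognition.face_encodings(image, known_face_locations=[face_location]) # encode only the given location
face_recognition.face_encodings(image) # detect all faces automatically, then encode each one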
Since we specified a list of known_face_locations with a single entry, we know there will be only one encoding returned, so we can take the first.
elon_musk_known_face_encoding_1 = face_encodings[0]
We can now repeat this process for the other image
image = face_recognition.load_image_file("elon-musk-2.png") # Load the image
face_locations = face_recognition.face_locations(image) # Get face locations
face_location = face_locations[0] # Only use the first detected face
face_encodings = face_recognition.face_encodings(image, [face_location]) # Get the face encoding for that location
elon_musk_known_face_encoding_2 = face_encodings[0] # Pull out the one returned face encoding
Now that we have elon_musk_known_face_encoding_1 and elon_musk_known_face_encoding_2, we can check whether this face appears in our other two images.
Checking For Matches
To check for matches in an image with face_recognition.compare_faces
, we need known face encodings (which we have just gotten above) and a single face encoding to check.
Since we are working with group photos, we will have to loop over the detected faces; the same approach also works when there is only one person in the image.
First, we need to get all the face encodings out of our first image to compare. I noted above that the second parameter to face_recognition.face_encodings, known_face_locations, is optional; leaving it out will detect all faces in the image automatically and return a face encoding for every face found. This is exactly what we want, and it removes the intermediate step of detecting faces ourselves.
image = face_recognition.load_image_file("elon-musk-in-group.jpg") # Load the image we are comparing
unknown_face_encodings = face_recognition.face_encodings(image) # Get face encodings for everyone in the image
Now that we have our unknown face encodings for all the faces in the group image, we can loop over them and check each one against our known encodings:
for unknown_face_encoding in unknown_face_encodings:
    matches = face_recognition.compare_faces(
        [elon_musk_known_face_encoding_1, elon_musk_known_face_encoding_2], # The known face encodings (can be only 1 - fewer is faster)
        unknown_face_encoding # The single unknown face encoding
    )
    print(matches)
If you run this, you will see something like:
[True, True]
[False, False]
[False, False]
[False, False]
Each line is one comparison (stored in matches) and each boolean corresponds to a known face encoding, indicating whether it matched the unknown face encoding.
From the values above, we can see that the first unknown face in the group image matched both known face encodings and the other three unknown faces didn't match either encoding. Using more than one known face encoding can give you a better estimate of a match but will cost you some speed.
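If you want a single yes/no per unknown face rather than the raw list of booleans, one simple approach (a sketch, not part of the original tutorial) is to treat any True as a match, or to use the library's face_distance helper for a numeric score inside the loop above:
is_match = any(matches) # True if the unknown face matched at least one known encoding
distances = face_recognition.face_distance(
    [elon_musk_known_face_encoding_1, elon_musk_known_face_encoding_2],
    unknown_face_encoding
) # lower values mean more similar faces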
If you run this on the other image (group-of-people.jpg), every comparison returns False.
Final Code For This Section
import face_recognition
# Load elon-musk-1.jpg and detect faces
image = face_recognition.load_image_file("elon-musk-1.jpg")
face_locations = face_recognition.face_locations(image)
# Get the single face encoding out of elon-musk-1.jpg
face_location = face_locations[0] # Only use the first detected face
face_encodings = face_recognition.face_encodings(image, [face_location])
elon_musk_known_face_encoding_1 = face_encodings[0] # Pull out the one returned face encoding
# Load elon-musk-2.png and detect faces
image = face_recognition.load_image_file("elon-musk-2.png")
face_locations = face_recognition.face_locations(image)
# Get the single face encoding out of elon-musk-2.png
face_location = face_locations[0]
face_encodings = face_recognition.face_encodings(image, [face_location])
elon_musk_known_face_encoding_2 = face_encodings[0]
# Load the image with unknown to compare
image = face_recognition.load_image_file("elon-musk-in-group.jpg") # Load the image we are comparing
unknown_face_encodings = face_recognition.face_encodings(image)
# Loop over each unknown face encoding to see if the face matches either known encoding
print('Matches for elon-musk-in-group.jpg')
for unknown_face_encoding in unknown_face_encodings:
    matches = face_recognition.compare_faces(
        [elon_musk_known_face_encoding_1, elon_musk_known_face_encoding_2], # The known face encodings (can be only 1 - fewer is faster)
        unknown_face_encoding # The single unknown face encoding
    )
    print(matches)
# Load the other image with unknown to compare
image = face_recognition.load_image_file("group-of-people.jpg") # Load the image we are comparing
unknown_face_encodings = face_recognition.face_encodings(image)
# Loop over each unknown face encoding to see if the face matches either known encoding
print('Matches for group-of-people.jpg')
for unknown_face_encoding in unknown_face_encodings:
    matches = face_recognition.compare_faces(
        [elon_musk_known_face_encoding_1, elon_musk_known_face_encoding_2],
        unknown_face_encoding
    )
    print(matches)
Additional PIL Help
This tutorial doesn't focus on PIL, but one function you may find useful is img.save(), which writes the image to a file. For example, img.save('my_image.png') saves a PIL image to my_image.png.
You can find more on PIL in its docs, and there is plenty of other help online as it is a long-established library.
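As a small combined example (a sketch reusing the img object from the landmark-drawing section; the output file name is arbitrary):
img.save('face-landmarks.png') # write the annotated PIL image to disk instead of, or as well as, showing it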
In this article, we will look at what face recognition is and how it differs from face detection in an image. We will briefly go over the theory of face recognition and then move on to writing the code. By the end of this article, you will be able to build your own program for recognizing faces in images as well as in a live webcam stream.
Contents
- Face detection
- Face recognition
- What is OpenCV?
- Face recognition using Python
- Extracting facial features
- Face recognition in a live webcam stream
- Face recognition in images
What is face detection?
One of the core tasks of computer vision is detecting objects automatically, without human intervention. For example, detecting human faces in an image.
People's faces differ from one another, but in general we can say that they all share certain common features.
There are many face detection algorithms. One of the oldest is the Viola-Jones algorithm, proposed in 2001 and still in use today. We will use it ourselves a little later; after reading this article, you may want to study it in more detail.
Face detection is usually the first step in solving more complex tasks such as face recognition or verifying a user by their face, but it has other useful applications as well.
Probably the most successful use of face detection is in photography. When you photograph your friends, the face detection algorithm built into your digital camera determines where their faces are and adjusts the focus accordingly.
What is face recognition?
So we (humans) have succeeded in building face detection algorithms. But can we also recognize whose faces they are?
Face recognition is a method of identifying or verifying a person's identity from their face. There are various face recognition algorithms, and their accuracy can vary. Here we are going to describe face recognition using deep learning.
Let's see how faces are recognized with deep learning. First, we transform, or in other words embed, the face image into a numerical vector. This is also known as deep metric learning.
To make it easier to understand, let's break the whole process down into three simple steps:
Face detection
Our first task is to detect faces in an image or video stream. Once we know the exact location or coordinates of a face, we crop that face out for further processing.
Feature extraction
Having cut the face out of the image, we need to extract its characteristic features. To do this, we will use a procedure called embedding.
A neural network takes an image as input and outputs a numerical vector that characterizes the key features of that face. (Translator's note: this is covered in more detail in, for example, our series of articles on convolutional neural networks.) In machine learning, this vector is called an embedding.
Now let's see how this helps with recognizing the faces of different people.
During training, the neural network learns to output similar vectors for faces that look alike.
For example, if you have several images of your face taken at different moments in time, it is natural that some facial features will change, but only slightly. The vectors of these images will therefore be very close to each other in the vector space.
To recognize faces of the same person, the network learns to output vectors that lie close together in the vector space, so after training the embeddings of one person end up clustered together.
We will not train such a network here; that requires significant computing power and a large amount of labeled data. Instead, we will use a neural network already pretrained by Davis King. It was trained on roughly 3,000,000 images and outputs a vector of 128 numbers that characterizes the key features of a face.
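As a minimal sketch of what such an embedding looks like in code (using the face_recognition library introduced below; the image file names are placeholders):
import face_recognition
image = face_recognition.load_image_file("person-1.jpg")
encoding = face_recognition.face_encodings(image)[0] # 128-dimensional embedding of the first detected face
print(len(encoding)) # 128
other = face_recognition.face_encodings(face_recognition.load_image_file("person-2.jpg"))[0]
# face_distance returns a small value for embeddings of the same person and a larger one for different people
print(face_recognition.face_distance([encoding], other))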
Now that we know how such networks work, let's see how to apply one to our own data.
We will pass all of our images through this pretrained network, get the corresponding vectors (embeddings), and then save them to a file for the next step.
Comparing faces
Now that we have a vector (embedding) for every face in our database, we need to be able to recognize faces in new images. As before, we compute the vector for the new face and then compare it with the vectors we already have. We can recognize the face if it is similar to one of the faces already in our database, meaning their vectors lie close to each other, as shown in the example below:
We fed two photographs into the network, one of Vladimir Putin and one of George Bush. We already had vectors (embeddings) for images of Bush, but nothing for Putin. When we compared the embedding of the new image of Bush, it was close to the existing vectors and we recognized him; there were no images of Putin in our database, so he could not be recognized.
Computer vision tasks are among the most interesting and challenging in the field of artificial intelligence.
Computer vision acts as a bridge between computer software and the visual world around us. It gives software the ability to understand and learn about everything visible in its environment.
For example, based on the color, size and shape of a fruit, we can determine its variety. This task may be very easy for the human mind, but in the context of computer vision it looks quite different.
First we collect data, then we carry out certain processing steps, and then we train a model, over many iterations, to recognize the type of fruit from its size, shape and color.
Nowadays there are various packages for machine learning, deep learning and computer vision tasks, and computer vision tooling is among the best developed of them.
OpenCV is an open-source library. It supports various programming languages, such as R and Python, and runs on many platforms, including Windows, Linux and MacOS.
Key advantages of OpenCV:
- open source and completely free
- written in C/C++ and faster than comparable libraries
- light on memory and works well with a small amount of RAM
- supports most operating systems, including Windows, Linux and MacOS
Installation
Here we will only cover installing OpenCV for Python. It can be installed with the pip or conda package managers (the latter if Anaconda is installed).
1. Using pip
With pip, the installation is done with the following command:
pip install opencv-python
2. Anaconda
If you are using Anaconda, run the following command in your Anaconda environment:
conda install -c conda-forge opencv
Face recognition using Python
In this part, we will implement face recognition with Python and OpenCV. First, let's see which libraries we need and how to install them:
- OpenCV
- dlib
- Face_recognition
OpenCV is an image and video processing library used to analyze them. It is applied to face detection, license plate reading, photo editing, advanced robotic vision, optical character recognition and much more.
The dlib library, maintained by Davis King, contains an implementation of deep metric learning. We will use it to construct the image vectors (embeddings) that play the key role in the face recognition process.
The face_recognition library, created by Adam Geitgey, wraps dlib's face recognition functionality and is essentially a high-level layer on top of it. It is very easy to work with, and we will use it in our code. Keep in mind that it has to be installed after dlib.
To install OpenCV, type the following in the command line:
pip install opencv-python
We have tried many ways of installing dlib on Windows, and the easiest is to use Anaconda. So first install Anaconda (the process is described in detail here), then enter the following command in the terminal:
conda install -c conda-forge dlib
Next, to install the face_recognition library, type the following in the command line:
pip install face_recognition
Now that all the required modules are installed, let's start writing code. We will need to create three files.
The first file will take a dataset of images and produce an embedding for every face. These embeddings will be written to a second file. In the third file, we will compare faces against the stored embeddings, and then we will do the same in a webcam stream.
Extracting facial features
First, you need to obtain a dataset of faces or build your own. The main thing is to make sure all the images are organized into folders, with each folder containing photos of one and the same person.
Then place the dataset in your working directory, that is, where you will be creating your own files.
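For example, the layout could look like this (the folder and file names are only an illustration; the folder name is what the script below uses as the person's name):
Images/
    Elon_Musk/
        img1.jpg
        img2.jpg
    Bill_Gates/
        img1.jpg
        img2.jpg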
Here is the code itself:
from imutils import paths
import face_recognition
import pickle
import cv2
import os

# the Images directory contains one folder of photos per person
imagePaths = list(paths.list_images('Images'))
knownEncodings = []
knownNames = []
# loop over all the image paths
for (i, imagePath) in enumerate(imagePaths):
    # extract the person's name from the folder name
    name = imagePath.split(os.path.sep)[-2]
    # load the image and convert it from BGR (OpenCV ordering)
    # to dlib ordering (RGB)
    image = cv2.imread(imagePath)
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    # use the face_recognition library to detect faces
    boxes = face_recognition.face_locations(rgb, model='hog')
    # compute the embedding for each detected face
    encodings = face_recognition.face_encodings(rgb, boxes)
    # loop over the encodings
    for encoding in encodings:
        knownEncodings.append(encoding)
        knownNames.append(name)
# store the embeddings together with the names in a dictionary
data = {"encodings": knownEncodings, "names": knownNames}
# use pickle to save the data to a file
f = open("face_enc", "wb")
f.write(pickle.dumps(data))
f.close()
All the embeddings are now saved in a file called face_enc. We can use them to recognize faces in images or in a live webcam stream.
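A quick way to check the saved file (this is the same loading call the scripts below use):
import pickle
data = pickle.loads(open("face_enc", "rb").read())
print(len(data["encodings"]), len(data["names"])) # one embedding and one name per detected face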
Face recognition in a live webcam stream
Here is the code for recognizing faces in a live webcam stream:
import face_recognition
import imutils
import pickle
import time
import cv2
import os

# find the path of the xml file containing the haarcascade
cascPathface = os.path.dirname(cv2.__file__) + "/data/haarcascade_frontalface_alt2.xml"
# load the haarcascade into the cascade classifier
faceCascade = cv2.CascadeClassifier(cascPathface)
# load the known faces and embeddings saved in the previous step
data = pickle.loads(open('face_enc', "rb").read())
print("Streaming started")
video_capture = cv2.VideoCapture(0)
# loop over frames from the video stream
while True:
    # grab the frame from the video stream
    ret, frame = video_capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(gray,
                                         scaleFactor=1.1,
                                         minNeighbors=5,
                                         minSize=(60, 60),
                                         flags=cv2.CASCADE_SCALE_IMAGE)
    # convert the input frame from BGR to RGB
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # compute the facial embeddings for each face in the input
    encodings = face_recognition.face_encodings(rgb)
    names = []
    # loop over the facial embeddings in case
    # there are multiple embeddings for multiple faces
    for encoding in encodings:
        # compare each encoding with the encodings in data["encodings"];
        # matches is a list of booleans, True for the known embeddings it
        # matches closely and False for the rest
        matches = face_recognition.compare_faces(data["encodings"], encoding)
        # set name = "Unknown" if no encoding matches
        name = "Unknown"
        # check to see if we have found a match
        if True in matches:
            # find the positions at which we get True and store them
            matchedIdxs = [i for (i, b) in enumerate(matches) if b]
            counts = {}
            # loop over the matched indexes and maintain a count for
            # each recognized face
            for i in matchedIdxs:
                # check the names at the respective indexes stored in matchedIdxs
                name = data["names"][i]
                # increase the count for the name we got
                counts[name] = counts.get(name, 0) + 1
            # pick the name with the highest count
            name = max(counts, key=counts.get)
        # update the list of names
        names.append(name)
    # loop over the recognized faces
    for ((x, y, w, h), name) in zip(faces, names):
        # draw the bounding box and the predicted face name on the frame
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, name, (x, y), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0, 255, 0), 2)
    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
video_capture.release()
cv2.destroyAllWindows()
In this example, faces were detected with the cv2.CascadeClassifier() method from the OpenCV library, but you could just as well use the face_recognition.face_locations() method, as we did in the earlier example.
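A minimal sketch of that swap inside the while loop (face_locations returns boxes as (top, right, bottom, left), so they are converted here to the (x, y, w, h) form the drawing loop expects; rgb must already be computed before this call):
boxes = face_recognition.face_locations(rgb, model="hog")
faces = [(left, top, right - left, bottom - top) for (top, right, bottom, left) in boxes]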
Face recognition in images
The code for detecting and recognizing faces in images is almost identical to what you have just seen. See for yourself:
import face_recognition
import imutils
import pickle
import time
import cv2
import os

# find the path of the xml file containing the haarcascade
cascPathface = os.path.dirname(cv2.__file__) + "/data/haarcascade_frontalface_alt2.xml"
# load the haarcascade into the cascade classifier
faceCascade = cv2.CascadeClassifier(cascPathface)
# load the known faces and embeddings saved in the previous step
data = pickle.loads(open('face_enc', "rb").read())
# find the path to the image you want to detect faces in and pass it here
image = cv2.imread("Path-to-img")
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# convert the image to grayscale for the haarcascade
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = faceCascade.detectMultiScale(gray,
                                     scaleFactor=1.1,
                                     minNeighbors=5,
                                     minSize=(60, 60),
                                     flags=cv2.CASCADE_SCALE_IMAGE)
# compute the facial embeddings for each face in the input image
encodings = face_recognition.face_encodings(rgb)
names = []
# loop over the facial embeddings in case
# there are multiple embeddings for multiple faces
for encoding in encodings:
    # compare each encoding with the encodings in data["encodings"];
    # matches is a list of booleans, True for the known embeddings it
    # matches closely and False for the rest
    matches = face_recognition.compare_faces(data["encodings"], encoding)
    # set name = "Unknown" if no encoding matches
    name = "Unknown"
    # check to see if we have found a match
    if True in matches:
        # find the positions at which we get True and store them
        matchedIdxs = [i for (i, b) in enumerate(matches) if b]
        counts = {}
        # loop over the matched indexes and maintain a count for
        # each recognized face
        for i in matchedIdxs:
            # check the names at the respective indexes stored in matchedIdxs
            name = data["names"][i]
            # increase the count for the name we got
            counts[name] = counts.get(name, 0) + 1
        # pick the name with the highest count
        name = max(counts, key=counts.get)
    # update the list of names
    names.append(name)
# loop over the recognized faces
for ((x, y, w, h), name) in zip(faces, names):
    # draw the bounding box and the predicted face name on the image
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(image, name, (x, y), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0, 255, 0), 2)
cv2.imshow("Frame", image)
cv2.waitKey(0)
Result:
This brings our article to a close. We hope you now have a general understanding of face recognition tasks and how to approach them.
Translated from the article "Face Recognition with Python and OpenCV".