Interactive faces

Face and text to speech 

These are Processing examples for learning:

-To use the text-to-speech library (ttslib)

-To draw shapes that follow the mouse position

-To load an image using PImage and place it at the desired screen coordinates

-To control the frame rate

-To assign face colours randomly or according to Perlin noise

-To execute something when a key is pressed

-To change the mouth and eye shape using a conditional statement

float r, g, b, R, G, B;
float mouthHeight = 10;
float eyeHeight = 20;

void setup() {
  size(800, 600);
  background(127);
}

void draw() {
  background(255);
  // animate the mouth and one eye, wrapping around when they reach a limit
  mouthHeight += 2;
  if (mouthHeight > 50) {
    mouthHeight = 10;
  }
  eyeHeight = eyeHeight - 2;
  if (eyeHeight < 0) {
    eyeHeight = 20;
  }
  frameRate(5);
  // light random colours (r, g, b) for the face, dark ones (R, G, B) for the features
  r = random(127, 255);
  g = random(127, 255);
  b = random(127, 255);
  R = random(126);
  G = random(126);
  B = random(126);
  fill(r, g, b);
  ellipse(mouseX, mouseY, 250, 350); // face
  fill(R, G, B);
  rectMode(CENTER);
  rect(mouseX - 50, mouseY - 100, 40, eyeHeight); // left eye (blinks)
  rect(mouseX + 50, mouseY - 100, 40, 20);        // right eye
  ellipse(mouseX, mouseY + 130, 60, mouthHeight); // mouth
  if (mouseY < height/2) {
    println("I am on top!");
  } else {
    println("I am at the bottom!");
  }
  if (mouseX < width/2) {
    println("I am on the left-hand side!");
  } else {
    println("I am on the right-hand side!");
  }
}

void keyPressed() {
  if (key == CODED) {
    if (keyCode == UP) {
      // jump the face to a fixed position (overwritten on the next mouse move)
      mouseX = width/2; mouseY = height/2 - 100;
      println("GOING UP ARROW");
    } else if (keyCode == DOWN) {
      mouseX = width/2; mouseY = height/2 + 100;
      println("GOING DOWN ARROW");
    }
  }
}

———————————————

//with TEXT TO SPEECH

import guru.ttslib.*;

PImage logo;
TTS tts;
float t, r, g, b, R, G, B;
float surprise = 10;

void setup() {
  size(800, 600);
  background(127);
  logo = loadImage("logo.png");
  tts = new TTS();
}

void draw() {
  image(logo, 0, 0);
  surprise += 2;
  if (surprise > 50) {
    surprise = 10;
  }
  frameRate(5);
  // random face colour
  r = random(255);
  g = random(255);
  b = random(255);
  // feature colour driven by Perlin noise, mapped from [0, 1] to [0, 255]
  R = map(noise(t), 0, 1, 0, 255);
  G = map(noise(t + 1), 0, 1, 0, 255);
  B = map(noise(t + 2), 0, 1, 0, 255);
  t += 1;
  fill(r, g, b);
  ellipse(mouseX, mouseY, 250, 350); // face
  fill(R, G, B);
  rectMode(CENTER);
  rect(mouseX - 50, mouseY - 100, 40, 20); // eyes
  rect(mouseX + 50, mouseY - 100, 40, 20);
  ellipse(mouseX, mouseY + 150, 100, surprise); // mouth
}

void keyPressed() {
  tts.speak("I am a human being");
  println("I am human");
}

Sad or happy face

Press the mouse to change your feelings

void setup() {
  size(400, 400);
}

void draw() {
  background(255);
  fill(120);
  ellipse(mouseX, mouseY, 200, 200); // grey face
  strokeWeight(6);
  arc(mouseX - 40, mouseY - 30, 50, 50, QUARTER_PI, PI - QUARTER_PI); // sad eyes
  arc(mouseX + 40, mouseY - 30, 50, 50, QUARTER_PI, PI - QUARTER_PI);
  arc(mouseX + 10, mouseY + 65, 90, 90, PI, PI + HALF_PI); // sad mouth

  if (mousePressed == true) {
    // redraw everything as a happy face while the mouse is pressed
    background(168, 234, 233);
    fill(255, 255, 0); // yellow face
    ellipse(mouseX, mouseY, 200, 200);
    fill(255);
    ellipse(mouseX - 40, mouseY - 10, 40, 60); // eye whites
    ellipse(mouseX + 40, mouseY - 10, 40, 60);
    fill(0);
    ellipse(mouseX - 40, mouseY - 10, 20, 30); // pupils
    ellipse(mouseX + 40, mouseY - 10, 20, 30);
    fill(255, 255, 0);
    arc(mouseX - 1, mouseY + 20, 100, 100, QUARTER_PI, PI - QUARTER_PI); // smile
    println("mouseX: " + mouseX + ", mouseY: " + mouseY);
  }
}

 

Microphone and face interaction.
Look for a working example at pompeu.neocities.org/face

var eyeSize;
var eyeHeight = 20;
/* var declares a variable; until it is assigned, its value is undefined.
   The names are chosen by me */
var mic;
var eyeColor;
// mic will hold the microphone input from the p5.sound library

function setup() {
  // these are the settings
  createCanvas(1800, 1200);
  // the canvas is the drawing area, (width, height) in pixels

  // create an audio input
  mic = new p5.AudioIn();

  // start the audio input
  // by default, it does not .connect() (to the computer speakers)
  mic.start();
}

function draw() {
  frameRate(5);
  eyeColor = random(255); // random value from 0 to 255
  eyeSize = random(20, 50);
  background(39, 47, 239); // blue background
  var vol = mic.getLevel();
  // map the microphone level (0 to 1) to a mouth height (20 to 80)
  var h = map(vol, 0, 1, 20, 80);
  if (mouseIsPressed) {
    fill(255, 0, 0); // red face
    ellipse(mouseX, mouseY, 200, 200); // face
    fill(0, 255, 0);
    ellipse(mouseX - 30, mouseY - 50, 30, eyeHeight); // right eye
    fill(eyeColor, 127 - eyeColor, 0);
    ellipse(mouseX + 30, mouseY - 50, 30, 20); // left eye
    fill(0, 0, 255);
    ellipse(mouseX, mouseY + 40, 100, h); // mouth
  } else {
    fill(230, 60 - h*4, 182);
    ellipse(mouseX, mouseY, 200 + h, 200 + h); // face grows with the volume
    fill(0, 255, 0);
    ellipse(mouseX - 30, mouseY - 50, 30, eyeHeight);
    ellipse(mouseX + 30, mouseY - 50, eyeSize, eyeSize);
    fill(0, 0, 255);
    arc(mouseX, mouseY + 40, 100, h, 0, PI); // mouth
  }
  eyeHeight = eyeHeight - 15; // blink
  if (eyeHeight < 0) {
    eyeHeight = 20;
  }
}
Faces and pollution data

Chernoff faces display multivariate data as the features of a human face. You will use them to connect to air pollution data and change the face features according to the pollution level.

You will be using the p5.js library and REST APIs from openweathermap and other data providers.

Look at my example available at pompeu.neocities.org/airpollution
It shows how to create a JavaScript function that draws emojis and how to change its arguments or parameters, and how to use the map() function to change properties (e.g. the background colour) depending on the mouse position.
Change the emoji function into a Chernoff face function, taking into account the parameters used to define Chernoff faces.
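
The map() function used in these sketches performs linear interpolation: it rescales a value from one range into another. As a sketch of what it computes, here is a minimal plain-JavaScript re-implementation (mapRange is my own name, not part of p5.js):

```javascript
// Rescale value from the range [inMin, inMax] to [outMin, outMax],
// the same linear interpolation that p5.js map() performs.
function mapRange(value, inMin, inMax, outMin, outMax) {
  return outMin + (outMax - outMin) * (value - inMin) / (inMax - inMin);
}

// Examples matching the sketches above:
console.log(mapRange(0.5, 0, 1, 0, 255));    // a noise() value of 0.5 -> 127.5
console.log(mapRange(990, 0, 1980, 0, 255)); // mouseX at mid-canvas -> 127.5
```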

// Code for Chernoff faces
var c1,c2;
function setup(){
createCanvas (1980, 1080);
}
function emoji(x,y,w,h,eyeSize, mouthSize, a1,a2,ec1,ec2,ec3,mc1,mc2,mc3){
noStroke();
ellipse (x,y,w,h);//face
fill (ec1,ec2,ec3);//eye rgb colour
ellipse (x-w/5, y-h/5,eyeSize,eyeSize);//eye
ellipse (x+w/5, y-h/5,eyeSize,eyeSize);
fill (mc1,mc2,mc3);//mouth rgb colour
arc(x,y+h/4,mouthSize*3, mouthSize,a1,a2,CHORD);//mouth a=angle
}
function draw(){
background(c1,c2,c1);
fill (255,255,0); //yellow
emoji(200,200,150,150,25,25,0,PI,0,0,255,255,0,0);
fill(0,255,255);
emoji(400,200,150,150,25,25,PI,0,255,0,255,40,100,0);
c1=map(mouseX,0,width,0,255);
c2=map(mouseY, 0,height,255,0)
}
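
To connect the face to real data, one option is OpenWeatherMap's air_pollution endpoint, which (as I understand its response format) returns an air quality index from 1 (good) to 5 (very poor). The helper below is a minimal sketch of mapping that index to face parameters; aqiToFace, mapRange and the parameter ranges are my own arbitrary choices, and YOUR_API_KEY is a placeholder:

```javascript
// Rescale value from [inMin, inMax] to [outMin, outMax] (like p5.js map()).
function mapRange(value, inMin, inMax, outMin, outMax) {
  return outMin + (outMax - outMin) * (value - inMin) / (inMax - inMin);
}

// Map an air quality index (1 = good ... 5 = very poor) to Chernoff-face
// parameters. The output ranges are illustrative choices, not a standard.
function aqiToFace(aqi) {
  return {
    eyeSize: mapRange(aqi, 1, 5, 30, 12),   // eyes narrow as the air worsens
    mouthSize: mapRange(aqi, 1, 5, 40, 15), // mouth shrinks
    red: mapRange(aqi, 1, 5, 0, 255)        // face reddens
  };
}

// In a p5.js sketch you could fetch the data with loadJSON, for example:
// loadJSON("https://api.openweathermap.org/data/2.5/air_pollution" +
//          "?lat=41.4&lon=2.2&appid=YOUR_API_KEY", function(data) {
//   var f = aqiToFace(data.list[0].main.aqi);
//   emoji(200, 200, 150, 150, f.eyeSize, f.mouthSize, 0, PI,
//         f.red, 0, 0, 0, 0, 0);
// });
```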