---
title: AperiCTF 2019 - [Prog] Émoji 🥳 (50 points)
author: Maltemo
tags: CTF, AperiCTF, Prog, Python, Requests, BeautifulSoup
---
AperiCTF 2019 - [Prog] Émoji 🥳 (50 points)
===
Written by [Maltemo](https://twitter.com/Maltemo), member of team [SinHack](https://sinhack.blog/)
[TOC]
___
## Statement of the challenge
### Description
Comptez le nombre d’émoji demandé par le serveur. Vous avez 3 secondes.
*(Count the number of emojis requested by the server. You have 3 seconds.)*
### Website
https://emoji.aperictf.fr/
## Analysis
The aim was clear: we had to count the number of emojis listed in the label tags.
There is no way to do that by hand in 3 seconds, so let's write a script that does the job for us.
### Libraries used
I'll be using Python because there are two great libraries:
* [Requests](https://www.pythonforbeginners.com/requests/using-requests-in-python)
* [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/)
To install them, I used pip:
```shell
pip install requests
pip install beautifulsoup4
```
Then in our script, we import the libraries:
```python=
import requests
from bs4 import BeautifulSoup
```
### Getting html from the website with requests
To make the server understand that we are the same user across all requests, we create a session with Requests.
We'll then use this session for every request we make to the web server.
The `response` object returned by the GET request contains the page's HTML:
```python=
# Creating the session
session = requests.session()
# GET request on "https://emoji.aperictf.fr/"
response = session.get("https://emoji.aperictf.fr/")
# Prints the requested HTML to the screen
print(response.text)
```
### Parsing the HTML with BeautifulSoup
The first step is to parse the HTML with BeautifulSoup. This gives us an object that helps us find the tags we need.
```python=
soup = BeautifulSoup(response.text,features="lxml")
```
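As a quick sanity check, here is how BeautifulSoup exposes a label's `for` attribute and its text. The HTML snippet is hypothetical, mimicking the structure of the challenge page, and it uses the stdlib `html.parser` backend so lxml is not required:

```python
from bs4 import BeautifulSoup

# Hypothetical HTML mimicking the structure of the challenge page
sample_html = '<label for="emoji_0">Nombre de 🐙:</label>'

soup = BeautifulSoup(sample_html, features="html.parser")
label = soup.find('label')
print(label['for'])  # emoji_0
print(label.text)    # Nombre de 🐙:
```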
The next step is to extract, from the label tags, every emoji we have to search for in the page.
`soup` helps us get every label from the page:
```python=
emojis = []
for label in soup.find_all('label'):
    # splits the string "Nombre de 🐙:" into ["Nombre", "de", "🐙:"]
    emoji_text = label.text.split(" ")[2]
    # remove the ':' after the emoji
    emoji_text = emoji_text[:-1]
    # We append to the search list:
    # the input name used to submit the answer for this emoji,
    # the emoji itself, and a counter set to 0
    emojis.append([label['for'], emoji_text, 0])
```
The next step is to scan the main div containing all the emojis and increment our counters every time we get a match:
```python=
# for every character of the text in the main div (the characters are emojis here)
for letter in soup.find('div', {'style': 'font-size:14px;'}).text:
    # for every emoji in the search list
    for emoji in emojis:
        # if we have a match
        if letter == emoji[1]:
            # increment this emoji's counter
            emoji[2] += 1
```
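One caveat with the character-by-character comparison: it only works for emojis made of a single Unicode code point. A stdlib-only alternative, sketched here on hypothetical data, is to count each emoji directly in the div's text with `str.count`, which also handles multi-code-point sequences:

```python
# Hypothetical text extracted from the main div
emoji_zone = "🐙🦑🐙🐙🦑"
targets = ["🐙", "🦑"]

# str.count matches whole substrings, so multi-code-point emojis work too
counts = {emoji: emoji_zone.count(emoji) for emoji in targets}
print(counts)  # {'🐙': 3, '🦑': 2}
```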
The last step consists in sending the answer back to the server with a POST request on our session.
The first parameter is the URL; the second one is the data (`{input_name: counter}`):
```python=
response_solved = session.post("https://emoji.aperictf.fr/",
                               {emojis[0][0]: emojis[0][2], emojis[1][0]: emojis[1][2]})
print(response_solved.text)
```
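The payload above hard-codes two emojis. If the page ever asked for more, a dict comprehension over the `emojis` list would build the payload for any number of entries (a sketch on hypothetical values):

```python
# Hypothetical entries in the same [input name, emoji, counter] shape
emojis = [["emoji_0", "🐙", 3], ["emoji_1", "🦑", 2]]

# One key per label, regardless of how many emojis the server asks for
payload = {name: count for name, _emoji, count in emojis}
print(payload)  # {'emoji_0': 3, 'emoji_1': 2}
```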
## Solution
Here is the final code :
```python=
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import requests
from bs4 import BeautifulSoup

session = requests.session()
response = session.get("https://emoji.aperictf.fr/")

emojis = []
soup = BeautifulSoup(response.text, features="lxml")
print(soup.prettify())

for label in soup.find_all('label'):
    emoji_text = label.text.split(" ")[2]
    emoji_text = emoji_text[:-1]
    emojis.append([label['for'], emoji_text, 0])

for letter in soup.find('div', {'style': 'font-size:14px;'}).text:
    for emoji in emojis:
        if letter == emoji[1]:
            emoji[2] += 1

response_solved = session.post("https://emoji.aperictf.fr/",
                               {emojis[0][0]: emojis[0][2], emojis[1][0]: emojis[1][2]})
print(response_solved.text)
```
### TL;DR
Automating the parsing of an HTML page with a script in order to answer a question within 3 seconds.
### Flag
The flag is **APRK{Aimemoj1}**
___
<a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-nd/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.