Robots
Where do robots find what pages are on a website? Hint: What does disallow tell a robot?
robots.txt
A robots.txt file tells search engine crawlers which URLs the crawler can access on your site.
User-agent: *  
Disallow: /70r3hnanldfspufdsoifnlds.html
The User-agent: * means this section applies to all robots.
The Disallow: /70r3hnanldfspufdsoifnlds.html tells the robot that it should not crawl that page.
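As a minimal sketch of how a well-behaved crawler reads this rule, Python's built-in urllib.robotparser can parse the file and answer the access question directly. The host https://example.ctflearn.com is a hypothetical stand-in for the real challenge server.

import urllib.robotparser

# Hypothetical challenge host; substitute the real one.
parser = urllib.robotparser.RobotFileParser()
parser.set_url("https://example.ctflearn.com/robots.txt")
parser.read()  # fetch and parse the robots.txt file

# Under User-agent: *, every crawler is told to avoid the page.
print(parser.can_fetch("*", "/70r3hnanldfspufdsoifnlds.html"))  # -> False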
Robots.txt is only advisory, so while crawlers are told to stay away, nothing stops us from visiting /70r3hnanldfspufdsoifnlds.html ourselves to see what is being hidden.
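A quick sketch of fetching the "hidden" page with the standard library (again using the hypothetical host from above):

import urllib.request

url = "https://example.ctflearn.com/70r3hnanldfspufdsoifnlds.html"
with urllib.request.urlopen(url) as response:
    # The page body contains the flag.
    print(response.read().decode())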
Flag
CTFlearn{r0b0ts_4r3_th3_futur3}