# 1236. Web Crawler

###### tags: `leetcode`,`bucket`,`medium`

>ref: https://leetcode.com/problems/web-crawler/
> ![](https://i.imgur.com/OqLHazZ.png) ![](https://i.imgur.com/LpkPQGZ.png) ![](https://i.imgur.com/Gai9WN3.png) ![](https://i.imgur.com/JOIAy67.png)
>1. tc: O(n), where n is the number of URLs reachable from `startUrl` under the same hostname
>2. sc: O(n) for the queue and the visited set

```java=
/**
 * // This is the HtmlParser's API interface.
 * // You should not implement it, or speculate about its implementation
 * interface HtmlParser {
 *   public List<String> getUrls(String url) {}
 * }
 */
class Solution {
    public List<String> crawl(String startUrl, HtmlParser htmlParser) {
        // The hostname is the third segment of "http://hostname/path".
        String hostname = startUrl.split("/")[2];
        Queue<String> q = new LinkedList<>();
        q.offer(startUrl);
        Set<String> visited = new HashSet<>();
        visited.add(startUrl);
        // BFS: expand each URL, keeping only unseen links on the same host.
        while (!q.isEmpty()) {
            String cur = q.poll();
            for (String next : htmlParser.getUrls(cur)) {
                if (!visited.contains(next) && hostname.equals(next.split("/")[2])) {
                    q.offer(next);
                    visited.add(next);
                }
            }
        }
        return new ArrayList<>(visited);
    }
}
```
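To sanity-check the BFS and the `split("/")[2]` hostname filter outside LeetCode, the solution can be exercised with a stub parser. `CrawlerDemo`, its nested `HtmlParser` interface, and the link map below are hypothetical stand-ins for LeetCode's judge, not part of the original problem:

```java=
import java.util.*;

public class CrawlerDemo {
    // Hypothetical stand-in for LeetCode's HtmlParser interface.
    interface HtmlParser {
        List<String> getUrls(String url);
    }

    // Same BFS as the submitted solution, as a static method for local testing.
    static List<String> crawl(String startUrl, HtmlParser htmlParser) {
        String hostname = startUrl.split("/")[2];
        Queue<String> q = new LinkedList<>();
        Set<String> visited = new HashSet<>();
        q.offer(startUrl);
        visited.add(startUrl);
        while (!q.isEmpty()) {
            String cur = q.poll();
            for (String next : htmlParser.getUrls(cur)) {
                if (!visited.contains(next) && hostname.equals(next.split("/")[2])) {
                    q.offer(next);
                    visited.add(next);
                }
            }
        }
        return new ArrayList<>(visited);
    }

    public static void main(String[] args) {
        // Made-up link graph: two pages on the same host, one off-host link.
        Map<String, List<String>> links = new HashMap<>();
        links.put("http://news.yahoo.com", Arrays.asList(
                "http://news.yahoo.com/news", "http://news.google.com"));
        links.put("http://news.yahoo.com/news",
                Arrays.asList("http://news.yahoo.com"));
        HtmlParser parser = url -> links.getOrDefault(url, Collections.emptyList());

        List<String> result = crawl("http://news.yahoo.com", parser);
        Collections.sort(result);
        // The off-host http://news.google.com link is excluded.
        System.out.println(result);
    }
}
```

Because `visited` is populated when a URL is enqueued (not when it is polled), each page is fetched via `getUrls` at most once.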