Add health check for urls #148
Conversation
Thanks @huangsam, this is super useful! It looks like there may be a bug though, because URLs that end in .html do not resolve correctly in this program. Any ideas there? For example, on the Flask page it says "http://blog.startifact.com/posts/older/what-is-pythonic" is a 404, but the actual URL is "http://blog.startifact.com/posts/older/what-is-pythonic.html", which resolves fine.
Thanks for pointing out an example case. The regular expression is good at detecting URLs, but it is not perfect at capturing each one in full. Separate parsing for Markdown and HTML might be necessary to capture the URLs in their entirety. As for the core logic of verifying a URL, that works just fine.
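For illustration only, format-aware extraction could look something like the sketch below. The helper names are hypothetical and not part of this PR; the script itself still relies on a single regular expression.

```python
# Hypothetical sketch of format-aware URL extraction, not the actual
# check_urls.py implementation: parse Markdown links and HTML anchors
# separately instead of relying on one catch-all regex.
import re
from html.parser import HTMLParser


def markdown_urls(text):
    """Pull URLs out of Markdown link syntax such as [label](http://example.com)."""
    return re.findall(r'\[[^\]]*\]\((https?://[^)\s]+)\)', text)


class AnchorParser(HTMLParser):
    """Collect href values from <a> tags in an HTML document."""

    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href' and value and value.startswith('http'):
                    self.urls.append(value)


def html_urls(text):
    parser = AnchorParser()
    parser.feed(text)
    return parser.urls
```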
check_urls.py (Outdated)

    urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

    # Configurable variables
    URL_MATCH = 'https?:\/\/[a-zA-Z0-9\.\-]+(html|\/)[=a-zA-Z0-9\_\/\?\&\-]+'
The last portion of the regex, [=a-zA-Z0-9\_\/\?\&\-]+, should be [=a-zA-Z0-9\_\/\?\&\.\-]+, since it is missing the . and thereby truncates URLs that end with .html.
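To make the difference concrete, here is a small comparison using re.search on the example URL from this thread; the two patterns are copied from the diff and from the proposed fix.

```python
import re

# Pattern from the diff vs. the proposed fix (extra "." in the last class).
OLD_MATCH = r'https?:\/\/[a-zA-Z0-9\.\-]+(html|\/)[=a-zA-Z0-9\_\/\?\&\-]+'
NEW_MATCH = r'https?:\/\/[a-zA-Z0-9\.\-]+(html|\/)[=a-zA-Z0-9\_\/\?\&\.\-]+'

text = 'See http://blog.startifact.com/posts/older/what-is-pythonic.html for details.'

# The old pattern stops before ".html" because "." is missing from the
# final character class, so the truncated URL later reports a 404.
print(re.search(OLD_MATCH, text).group(0))
# http://blog.startifact.com/posts/older/what-is-pythonic
print(re.search(NEW_MATCH, text).group(0))
# http://blog.startifact.com/posts/older/what-is-pythonic.html
```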
The output has been reduced significantly, down to the following:
- Add url collection algorithm
- Optimize regex + config for clarity
- Handle exceptions in get_url_status
(force-pushed from 0cdd819 to b56bb5e)
Timeout errors are now showing up with
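For context on what "handle exceptions in get_url_status" can mean in practice, here is a minimal sketch that turns timeouts into an explicit status instead of an unhandled traceback. The real function in this PR may differ.

```python
# Rough sketch only; the actual get_url_status in check_urls.py may differ.
import requests


def get_url_status(url, timeout=5):
    """Return (url, status) where status is an HTTP code or an error label."""
    try:
        response = requests.get(url, timeout=timeout, verify=False)
        return url, response.status_code
    except requests.exceptions.Timeout:
        return url, 'timeout'
    except requests.exceptions.RequestException as exc:
        return url, 'error: {}'.format(exc.__class__.__name__)
```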
I'm happy to merge this now because it's super helpful. I guess the only other bit is that it's picking up non-URLs like "https://c6c6d4e8.ngrok.io", which are embedded in the code but don't actually link to sites. It's not a huge deal, but filtering those out would be a nice improvement if you want to keep working on the script.
Updated the change log with a shout-out for the new health check script. Thanks again @huangsam!
Thanks for the reference @mattmakai! I do understand that non-URLs are being picked up. That is not a fault of the regex itself, but rather of the context of the content surrounding the URLs. As a workaround, I created this line to ignore some obvious offenders. To provide an "authentic" solution, I imagine that the one-line Bash command I invoke at the start of
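The exact ignore line isn't shown in this excerpt, but the workaround amounts to a simple filter. Here is a sketch with made-up patterns, not the exact ones used in check_urls.py.

```python
# Illustrative ignore-list filter; the patterns are examples only.
IGNORED_PATTERNS = ('ngrok.io', 'localhost', '127.0.0.1')


def is_checkable(url):
    """Skip URLs that are placeholders in code samples rather than real links."""
    return not any(pattern in url for pattern in IGNORED_PATTERNS)


urls = ['https://www.fullstackpython.com/', 'https://c6c6d4e8.ngrok.io']
print([u for u in urls if is_checkable(u)])  # the ngrok placeholder is dropped
```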
Here is the output that comes from the check_urls.py script: urlout.txt

This solution uses ThreadPoolExecutor to resolve the inherent I/O bottleneck of URL requests. It also uses a fairly comprehensive regular expression for matching URLs. The pattern can be tweaked in the future if needed.
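As a rough illustration of that approach (not the exact script), fanning the checks out over a thread pool could look like this; get_url_status is assumed to behave as sketched earlier in this thread.

```python
# Minimal sketch of concurrent URL checking with ThreadPoolExecutor;
# not the exact check_urls.py implementation.
from concurrent.futures import ThreadPoolExecutor

import requests


def get_url_status(url, timeout=5):
    """Return (url, status), where status is an HTTP code or 'error'."""
    try:
        return url, requests.get(url, timeout=timeout).status_code
    except requests.exceptions.RequestException:
        return url, 'error'


def check_urls(urls, workers=8):
    """Check URLs concurrently and report anything that is not a 200."""
    with ThreadPoolExecutor(max_workers=workers) as executor:
        for url, status in executor.map(get_url_status, urls):
            if status != 200:
                print('{} -> {}'.format(url, status))


if __name__ == '__main__':
    check_urls([
        'https://www.fullstackpython.com/',
        'https://httpbin.org/status/404',
    ])
```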