Thanks to a contribution by Hannu Hartikainen, geminispace.info is now again able to honor the user-agents "gus", "indexer" and "*" in robots.txt.
The revamped data store seems to work fine so far.
Unfortunately I had to disable the "newest hosts" and "newest pages" pages as the data is currently not available. I'll add them back later, but before that I'd like to have the cleanup mechanism implemented to get rid of old data from capsules that are no longer available.
I finally managed to analyze the indexing process. In the end it turned out to be an issue in calculating the backlink counters, and with an adapted query indexing is fast again.
Obviously I was horribly wrong all the time in blaming the slow VPS.
Unfortunately this is only a small step in the major overhaul of GUS.
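To illustrate the kind of query change that helps here (the schema and all names below are my own invention for the sketch, not the actual GUS data model): instead of running one correlated COUNT query per page while indexing, fetch every backlink counter in a single grouped query.

```python
import sqlite3

# Hypothetical minimal schema -- table and column names are assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE page (id INTEGER PRIMARY KEY, url TEXT);
    CREATE TABLE link (from_page INTEGER, to_page INTEGER);
    INSERT INTO page VALUES (1, 'gemini://a.example/'),
                            (2, 'gemini://b.example/'),
                            (3, 'gemini://c.example/');
    INSERT INTO link VALUES (1, 2), (3, 2), (1, 3);
""")

# Slow pattern: "SELECT COUNT(*) FROM link WHERE to_page = ?" once per page.
# Faster: compute all backlink counters in one pass and look them up.
backlinks = dict(conn.execute(
    "SELECT to_page, COUNT(*) FROM link GROUP BY to_page"
))
print(backlinks)  # page 2 has 2 backlinks, page 3 has 1
```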
More trouble along the way. Although the VPS hosting geminispace.info has 8 GB of RAM and serves nothing else, the index update got OOM-killed. :(
It seems that, due to the continued growth of Gemini, we are hitting the same problems Natalie hit a few months ago with GUS. I'm currently unsure about the next steps.
The last reindex took almost ten days to complete, as I triggered a full index run. This was necessary after the cleanup, since there is currently no incremental cleanup of the search index implemented.
The design of GUS - which was clearly never meant to index such a huge number of capsules - and the slow VPS make it hard to keep the index up to date. Unfortunately we are stuck with the VPS for now.
Currently there is no progress to report on the coding side. I'm busy with various other things, and late in the evening I can't be bothered to tackle some of the obvious tasks to improve GUS. If you are interested in helping improve GUS/geminispace.info, feel free to comment on one of the issues or drop me a mail.
I've done some manual cleanup of the base data over the last days. This decreased the raw data size from over 3 GB to roughly 2 GB. Unfortunately a new mirror of godocs came online... another thing we need to exclude for the moment.
geminispace.info has been running rather stably the last weeks, but I added it to my external status monitor anyway:
It will alert me if it goes down.
No news on the coding side currently. Other projects occupy the time I can currently devote to tech stuff.
We'll have a few days off; I'll get back to some coding after that.
geminispace.info is now aware of more than 1000 capsules. Unfortunately this number is somewhat misleading: some of the capsules may already be gone, but GUS lacks a mechanism for invalidating old data.
I'll probably start with some manual cleanup in the next days, so don't worry if the numbers go down.
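The invalidation mechanism could look something like this sketch (plain SQLite; the schema, column names and cutoff are my own assumptions, not what GUS actually does): drop pages whose capsule hasn't been reachable within a cutoff period.

```python
import sqlite3
import time

# Hypothetical schema: one row per page, with the time its capsule was
# last successfully reached. All names here are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page (url TEXT, last_seen REAL)")

now = time.time()
conn.executemany("INSERT INTO page VALUES (?, ?)", [
    ("gemini://alive.example/index.gmi", now),
    ("gemini://gone.example/index.gmi", now - 90 * 86400),  # 90 days stale
])

# Invalidate everything last seen more than 30 days ago.
cutoff = now - 30 * 86400
conn.execute("DELETE FROM page WHERE last_seen < ?", (cutoff,))

remaining = [row[0] for row in conn.execute("SELECT url FROM page")]
print(remaining)  # only the live capsule's page survives
```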
We are back on track with crawling and indexing; everything is up to date again.
I had to add another news mirror and a Wikipedia mirror to the exclude list. The current implementation can't handle such a huge amount of information well.
Obviously this didn't work as expected. For whatever reason, indexing fails repeatedly on one page or another with a mysterious SQLite error. It may take a few days till I find enough time to track down the cause of this error.
If you are familiar with peewee and SQLite, or have come across this issue before, let me know:
The index is currently a few days behind. It will hopefully catch up during the day.
From now on I will exclude any sort of news or YouTube mirrors from the crawl without further notice.
For the sake of transparency I may add a section which mentions what is excluded and why. But this is not a high priority for me.
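The exclusion itself can be as simple as a prefix check on the URL before a page is queued for crawling. A minimal sketch - the function name and prefix values are made up for illustration:

```python
# Hypothetical exclude list -- these prefixes are placeholders.
EXCLUDED_PREFIXES = (
    "gemini://mirror.example/news/",
    "gemini://mirror.example/youtube/",
)

def is_excluded(url):
    """Return True if the URL falls under an excluded subtree."""
    return any(url.startswith(prefix) for prefix in EXCLUDED_PREFIXES)

print(is_excluded("gemini://mirror.example/news/today.gmi"))  # True
print(is_excluded("gemini://alice.example/gemlog/"))          # False
```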
There are currently some issues during the crawl that sometimes lead to an interruption. So it may take more than the usual 3 days until new content is discovered.
This will eventually be solved when the migration to PostgreSQL is done. Unfortunately I'm quite busy with real life currently, so it may take some time.
I started working on migrating the backing database to PostgreSQL instead of SQLite.
This may take a while, but it will eventually solve some of the problems that currently occur around crawling and indexing.
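With peewee (the ORM mentioned above), one way to keep both backends usable during such a migration is the DatabaseProxy pattern: models bind to a placeholder that is initialized with either backend at startup. A sketch assuming peewee 3.x; the database name and credentials are placeholders, not real configuration:

```python
from peewee import DatabaseProxy, Model, PostgresqlDatabase, SqliteDatabase

database = DatabaseProxy()  # placeholder, resolved at startup

class BaseModel(Model):
    class Meta:
        database = database  # all models inherit the deferred database

# Choose the backend via configuration; values here are placeholders.
USE_POSTGRES = True
if USE_POSTGRES:
    database.initialize(PostgresqlDatabase(
        "gus", user="gus", password="changeme", host="localhost"))
else:
    database.initialize(SqliteDatabase("gus.sqlite"))
```

This keeps the model definitions identical while the storage backend is swapped underneath.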
Not sure if I can keep the update schedule of every 3 days.
The current crawl has been running for more than 24 hours now and still isn't finished.
The shady workaround is now in place - index updates won't block searches anymore.
This is even more important with the ongoing growth of Geminispace - as of today there are more than 750 capsules we know about.
I'm currently working on a workaround to avoid the index update blocking search requests.
Unfortunately I broke the index while doing this... I need to be more careful when doing maintenance.
I've made some adjustments on how GUS/geminispace.info uses robots.txt.
Previously we tried to honor the settings for the *, indexer and gus user-agents. That didn't work out well with the available Python libraries for robots.txt parsing, and GUS ended up crawling files it wasn't intended to.
We now only use the settings for * and indexer, no special handling for GUS anymore. All indexers unite. ;)
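For reference, this is how Python's stdlib urllib.robotparser resolves those groups: a crawler identifying as "indexer" gets the indexer group if one exists, and only falls back to the * group otherwise - the groups are not merged. The robots.txt content below is made up for the demo:

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt -- invented content for illustration.
robots_txt = """\
User-agent: indexer
Disallow: /private/

User-agent: *
Disallow: /tmp/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# "indexer" matches its own group; the * rules are NOT merged in.
print(rp.can_fetch("indexer", "/private/"))  # False
print(rp.can_fetch("indexer", "/tmp/"))      # True, only the indexer group applies
print(rp.can_fetch("somebot", "/tmp/"))      # False, falls back to *
```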
The first fully unattended index update happened last night.
There are still some rough edges to clean up, but we are on the way to up-to-date search results without manual intervention.
geminispace.info has just been announced on the gemini mailing list.
geminispace.info is going public! Yeah! :)
Test drive of the instance search.clttr.info started.