A webcrawler is how Google and other search engines get all their data.
It's a bot that downloads every webpage it can find (such as every thread and post in this forum), and then does something with the data it has gathered. Google, for example, uses the gathered data to provide search results. So if you google "rhyme and punishment toot", it uses this data to provide a link to Toot's profile on the forums.rnp-moonglade.net forum.
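For the curious, the core of a crawler is really simple: download a page, collect every link on it, queue the links you haven't seen yet, repeat. Here's a rough sketch in Python (the names `crawl`, `fetch` and `LinkExtractor` are just made up for this example, and a real crawler would also obey robots.txt, rate-limit itself, and so on):

```python
from collections import deque
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, fetch, limit=1000):
    """Breadth-first crawl starting from start_url.
    `fetch` is whatever function returns the HTML for a given URL.
    Returns a dict mapping each visited URL to its HTML."""
    seen = {start_url}
    queue = deque([start_url])
    pages = {}
    while queue and len(pages) < limit:
        url = queue.popleft()
        html = fetch(url)
        pages[url] = html
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            if link not in seen:   # never download the same page twice
                seen.add(link)
                queue.append(link)
    return pages
```

Point it at a forum index and it will eventually reach every thread that's linked from somewhere, which is exactly why a misbehaving crawler can generate so much traffic.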
Normally webcrawlers are supposed to be reasonably polite and not hammer servers too hard. But this damn SEMrush bot just keeps downloading the entire forum over and over, day after day, causing tens of bloody gigabytes of traffic. So I've now told the server to deny it access, so that it just gets a tiny error message everywhere instead of big forum pages.
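If anyone's wondering what that deny rule looks like: bots identify themselves with a User-Agent string on every request, so you can match on that. This is a rough sketch assuming an nginx server (the exact config depends on your setup, and the same idea works in Apache with a RewriteCond on %{HTTP_USER_AGENT}):

```nginx
# Inside the server {} block of the forum's nginx config.
# Match the bot's User-Agent string, case-insensitively.
if ($http_user_agent ~* "SemrushBot") {
    return 403;   # a tiny "Forbidden" response instead of a full forum page
}
```

A politer first step would be a Disallow rule in robots.txt, but that only works if the bot actually respects it.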