Is it possible to get a web archiving tool to crawl and archive one layer of links along with the page itself? All I have found so far is to follow each link manually and save a separate archive or document for it, which I'd rather not do.
I ask because it's not uncommon for me to want to archive a research article, and articles often link out to references like graphs and similar material that provide context. Should those links rot, the original archive isn't as useful.
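To clarify what I mean by "a layer of links": roughly the behaviour sketched below. This is just a throwaway illustration I put together using only the Python standard library, not a real tool (the `archive_one_hop` and `extract_links` names are mine), and it only dumps raw HTML rather than producing a proper self-contained archive, which is what I'm actually after.

```python
# Rough sketch of "archive a page plus one hop of links" (stdlib only).
import os
import re
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def extract_links(html, base_url):
    """Return absolute http(s) URLs linked from the page."""
    parser = LinkExtractor()
    parser.feed(html)
    urls = []
    for href in parser.links:
        absolute = urljoin(base_url, href)
        if urlparse(absolute).scheme in ("http", "https"):
            urls.append(absolute)
    return urls


def archive_one_hop(url, out_dir="archive"):
    """Save the page at `url` and every page it links to (depth 1)."""
    os.makedirs(out_dir, exist_ok=True)
    html = urlopen(url).read().decode("utf-8", errors="replace")
    for i, target in enumerate([url] + extract_links(html, url)):
        safe = re.sub(r"[^A-Za-z0-9]+", "_", target)[:100]
        try:
            body = urlopen(target).read()
        except OSError:
            continue  # a dead reference link shouldn't abort the whole archive
        with open(os.path.join(out_dir, f"{i:03d}_{safe}.html"), "wb") as f:
            f.write(body)
```

(`wget -r -l 1 -k` gets close on the command line, but I'd prefer something that produces a single archive file I can keep alongside the article.)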