Since each prefix hash is built from the previous one, we can rewrite the hash function as a generator that yields the hash of every prefix of its input. Feeding it the 100-character window starting at a given offset therefore produces the hashes of all substrings of length 1..100 at that offset in a single pass, so we only have to run it once per offset (0..1000) in each of the 20 files. That is just `1000*100*20` (about 2 million) iterations.
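
To see why one pass per window is enough, here is a minimal sketch of the prefix-hash recurrence, using the same base-31 polynomial and character offset as the solver below; the helper names and the test string are made up for illustration:

```python
mod1 = int(1e9 + 7)

def full_hash(s):
    # hash the whole string from scratch
    return sum((ord(c) - 96) * pow(31, i, mod1) for i, c in enumerate(s)) % mod1

def prefix_hashes(s):
    # reuse the previous prefix hash instead of recomputing it
    h = 0
    for i, c in enumerate(s):
        h = (h + (ord(c) - 96) * pow(31, i, mod1)) % mod1
        yield h  # hash of s[:i+1]

s = "example"  # arbitrary lowercase test string
assert list(prefix_hashes(s)) == [full_hash(s[:k]) for k in range(1, len(s) + 1)]
```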

For the lookup table we use a Python dict mapping hash tuple -> plaintext.

Then we iterate over the list of hashes, recover each plaintext from the lookup table, concatenate the pieces, and hash the concatenation once more. Since the hash function is now a generator, we simply exhaust it and discard all intermediate results, as we only care about the final hash.

```python
mod1 = int(1e9 + 7)
mod2 = int(1e9 + 9)

hashtable = dict()

def ha(s):  # modified original hash function: a generator yielding the hash tuple of every prefix
    h1 = 0
    h2 = 0
    for i in range(len(s)):
        h1 += (ord(s[i]) - 96) * pow(31, i, mod1)
        h1 %= mod1
        h2 += (ord(s[i]) - 96) * pow(31, i, mod2)
        h2 %= mod2
        yield (h1, h2), s[:i+1]  # yields hashes for s[:1], s[:2], ...

for fn in range(20):  # read all files
    print("processing", fn)
    fc = open(f"a/{fn}").read()
    for i in range(len(fc)):  # all offsets in the file
        for has, cleartext in ha(fc[i:i+101]):  # hashes of all substrings starting at i (101 is a safe upper bound for length 100)
            hashtable[has] = cleartext

# parse each line of hashes.txt into a hash tuple, look it up, and concatenate the plaintexts
concat = "".join([hashtable[tuple(map(int, line.split(" ")))]
                  for line in open("hashes.txt").read().strip().split("\n")])

for has, _ in ha(concat): pass  # exhaust the generator; `has` ends up as the final hash tuple

# The flag is the product of the two hash values of the concatenated string, wrapped in flag{}
print("flag{", has[0] * has[1], "}", sep="")
```
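
As an aside on the `for has, _ in ha(concat): pass` line: because `ha` is a generator, that loop just exhausts it and leaves `has` bound to the last hash tuple. If you prefer to make that explicit, a length-1 `deque` does the same thing; this is only a stylistic alternative (reusing `ha` and `concat` from the script above), not part of the original solution:

```python
from collections import deque

# drain the generator, keeping only the last yielded ((h1, h2), prefix) pair
final_hash, _ = deque(ha(concat), maxlen=1).pop()

print("flag{", final_hash[0] * final_hash[1], "}", sep="")
```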