Apple M1 version

It would be nice if DEVONthink were compiled and optimised for the M1. It could use the neural engine of the M1 for better performance and more artificial intelligence.

DEVONthink is a universal app which runs natively on Apple silicon. Where are you experiencing performance issues that would justify (if I understood your post correctly) developing a fork solely for the M1? The same question goes for the AI; what specifically would you want/expect the M1 fork to do differently?
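If you'd rather verify the "native" part yourself than take it on faith, macOS exposes a sysctl key that tells a process whether it is being translated by Rosetta 2. A minimal Swift sketch (the function name is just an illustration, nothing from DEVONthink itself):

```swift
import Foundation

// Returns true when the current process is being translated by Rosetta 2.
// macOS exposes this through the "sysctl.proc_translated" sysctl:
// 1 = translated, 0 = native; the key is absent on Intel Macs.
func isRunningUnderRosetta() -> Bool {
    var translated: Int32 = 0
    var size = MemoryLayout<Int32>.size
    guard sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0) == 0 else {
        return false  // key missing: not translated
    }
    return translated == 1
}

print(isRunningUnderRosetta() ? "Running under Rosetta 2" : "Running natively")
```

Without any code, Activity Monitor's "Kind" column shows the same thing: "Apple" for native processes, "Intel" for translated ones.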

Forking comes at a cost: parallel upkeep of two versions of an app means less time available for new features, support, or other work.


I have an MBA M1 (16 GB RAM, 1 TB disk) and an MBP (16 GB RAM, 1 TB disk, 2021), and on the M1, DT is not just faster, it runs DRAMATICALLY faster. A complex search that takes 10 seconds on my MBP is almost immediate on my MBA; the same goes for opening databases, etc.


My M1 Air is also an absolute monster for being a thin laptop (though it also feels more rigid than previous ones). :heart:

DEVONthink definitely enjoys the experience :slight_smile:


I absolutely agree. But is the code of DEVONthink native for the M1, or does it use Rosetta?
And does it make use of the AI of the M1 chip?

It’s a universal app, so it is native to the M1 and does not use Rosetta. I’d wager that the AI does not use specific features of the neural engine on the M1 (of course, only DT can answer that for sure). But my question remains: what is it you want DT to do differently from what it does now?
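For what it's worth, "using the neural engine" on the M1 mostly means running models through Core ML and letting it choose the compute units; there is no public API for targeting the Neural Engine directly. A hedged Swift sketch of how an app can opt in, with a placeholder model name that says nothing about how DEVONthink's AI is actually implemented:

```swift
import Foundation
import CoreML

// A minimal sketch of letting Core ML use the Neural Engine.
// "Classifier.mlmodelc" is a hypothetical bundled model, not a DEVONthink one.
func loadModel() throws -> MLModel {
    let config = MLModelConfiguration()
    // .all lets Core ML schedule work across CPU, GPU, and the Neural Engine.
    // macOS 13 adds .cpuAndNeuralEngine to narrow the choice further.
    config.computeUnits = .all

    guard let url = Bundle.main.url(forResource: "Classifier", withExtension: "mlmodelc") else {
        throw CocoaError(.fileNoSuchFile)
    }
    return try MLModel(contentsOf: url, configuration: config)
}
```

Whether that would actually speed up DEVONthink's classification and search depends entirely on whether its AI is built on models Core ML can run, which only the developers can say.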