One of the biggest updates introduced with macOS Tahoe was a turbo-charged Spotlight. Apple smoothed over a number of long-standing trade-offs and added perks that will appeal to power users. Of course, the usual “Sherlocking” of third-party tools ensued. But not all of the changes were well received.
For example, the death of the classic Launchpad has drawn criticism and spawned a slew of apps that bring the experience back. Likewise, the Spotlight upgrades borrowed heavily from productivity tools like Raycast, yet weren’t received with the same enthusiasm as those third-party alternatives.
Personally, I find the new Spotlight experience a little overwhelming and, at the same time, a bit functionally hollow. This is where Vector comes into play. It’s a minimalist Spotlight replacement built by Ethan Lipnick – a former Apple engineer – and it offers some genuinely cool AI-driven conveniences.
What does Vector do?
At its most basic, Vector aims to take over Spotlight’s broad role. It lives in the Dock as a dedicated app icon, but you can also access it from the menu bar to reduce Dock clutter. So what can it do?
To get started, launch the app. But instead of a Spotlight-style window that takes over the central area of the screen, Vector’s window can open from any corner of the screen, keeping things tidy and visually unobtrusive.
Speaking of flexibility, you can set custom keyboard shortcuts to bring up the main window, the clipboard mode, or the emoji picker. Likewise, you can pick whichever key combination is most convenient for launching a file search or browsing your chats in the built-in Messages app.
In Vector’s favor, not only does it look extremely cool, but the clean design and snappy animations make it feel like something Apple itself could have shipped. Interacting with it feels quicker than Spotlight, and search results come back nearly as fast, too.
I noticed that the semantic search system (especially for files stored on your system) returns results faster. The one limitation is that no preview is displayed when you search for a file in Vector’s search window. So if you have a bunch of files saved under names like ABC-1 and ABC-2, you’re effectively flying blind. Another small drawback is search quality. By default, Vector runs a local AI model that is only 64MB in size, and its results are nowhere near as good as Spotlight’s.
For example, when I typed a flight number into the search field, Spotlight automatically surfaced the boarding pass with that number printed on it; Vector came up empty. If you want better semantic search output, you’ll need to download the more powerful BGE-M3 model, which takes up 1.1GB of storage.
Lots of hits, a few misses
The indexing process is fairly opaque, although it seems to work fine for files stored on the system. Messages is patchier: I couldn’t pull up my most recent same-day chats with friends and family, yet random service messages and one-time codes returned valid results when searching the Messages index.
Semantic understanding is also hit-or-miss. For example, if I search for “definition of catharsis,” I get results from the Dictionary app and Wikipedia. However, when I try a contextual search for content inside PDF files, Vector comes up short.
It did well retrieving forecast information from the Weather app, but failed to pull details from the Calendar app, even for straightforward queries. Finding entries that should lead to a map view was pretty reliable, but natural-language queries like “distance between Umpling and Laitumkhrah” made it stumble.
Vector draws on a variety of sources to answer your queries. The list spans Calendar, Dictionary, Contacts, Maps, Weather, Wikipedia, apps, Messages, files, and even the emoji deck. You can disable indexing (and the semantic search system) for each source individually. I like this flexibility because it not only gives you control over privacy but also reduces the processing load.
On the bright side, I absolutely love the clipboard system. Summoning it brings up a sliding card carousel that’s buttery smooth to glide through. Another nice touch is that each card shows the app the content was copied from, alongside the date and/or approximate time.
Vector offers a lot of flexibility in how you interact with it. You can use it purely as a full-fledged app launcher, deploy it as a system-wide semantic search tool, or simply lean on the built-in clipboard history. In addition, you can choose between six positions where the Vector window opens.
I’ve anchored it to the bottom-right corner because it looks neat there and doesn’t obstruct the foreground app windows. The clipboard system also lets you set automatic history deletion, with retention ranging from a day to a week or a month. You can also keep everything in the clipboard directory forever without worrying about privacy, since all copied content is processed and stored only on the device.
Reading the room (for the silicon inside)
I’ve written repeatedly that Apple silicon is in a league of its own. Whether in Macs or iPhones, the balance it strikes between raw performance and efficiency is miles ahead of the competition. But despite this edge, Apple doesn’t let you tinker with performance output.
On a Windows machine, you get native utilities like Armoury Crate (on Asus ROG computers) and third-party apps that provide granular control over everything from GPU frequency to fan speed. Even phones like the Red Magic 10S Pro and OnePlus 15 let you unlock their full potential by adjusting performance presets.
Vector doesn’t fully solve this for your Mac, but within the app you can adjust how it spends its performance budget. You can run Vector’s AI-powered functions solely on the CPU. If you want better performance, you can combine the CPU and GPU, or pair the CPU with the Neural Engine (Apple’s NPU) for faster output.
And if power consumption isn’t a concern, the app also lets you spread the workload across the CPU, GPU, and NPU simultaneously. Apple’s M-series processors pack a fairly powerful Neural Engine, so the best everyday combination for running Vector is to split the workload between the CPU and NPU.
If you have a beefier slice of silicon, such as the M4 Pro or M4 Max – both of which have more GPU cores – it’s worth choosing the performance profile that brings the GPU into the mix. There are a few other adjustments you can make depending on the chip inside your Mac (and the kind of performance you’re after).
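Under the hood, these profiles presumably map onto Core ML’s standard compute-unit options; Vector’s internals aren’t public, so the snippet below is only a minimal sketch of how any macOS app picks which silicon runs an on-device model, with the model path as a placeholder.

```swift
import CoreML

// Minimal sketch: choosing which silicon executes an on-device Core ML model.
// The compute-unit cases are real Core ML options; the model path is a
// placeholder, not anything actually shipped by Vector.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine   // or .cpuOnly, .cpuAndGPU, .all
let modelURL = URL(fileURLWithPath: "/path/to/Embedder.mlmodelc")
let model = try MLModel(contentsOf: modelURL, configuration: config)
```

The `.all` case corresponds to the everything-at-once profile described above, with the system scheduling work across CPU, GPU, and Neural Engine as it sees fit.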
First, there’s the default BGE Small model, which weighs in at just 64MB and runs out of the box. It’s capable enough for contextual searches across the local file container and messages. However, if you want better answers and support for more languages, you should look at the BGE-M3 model.
The M3 here stands for multi-functionality, multi-linguality, and multi-granularity, which is fairly self-explanatory as far as the model’s advantages go. It requires just over 1GB of storage, but offers much better contextual retrieval and supports longer inputs of around six thousand words (8,192 tokens).
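Whichever model you pick, the underlying mechanics are the same: every file and every query gets turned into an embedding vector, and cosine similarity decides the ranking. Here’s a minimal sketch of that idea; since the BGE models aren’t natively available in Swift, Apple’s built-in sentence embedding stands in purely so the example runs, and the file names are made up.

```swift
import NaturalLanguage

// Cosine similarity between two embedding vectors: higher means
// semantically closer, regardless of keyword overlap.
func cosine(_ a: [Double], _ b: [Double]) -> Double {
    let dot = zip(a, b).map(*).reduce(0, +)
    let normA = a.map { $0 * $0 }.reduce(0, +).squareRoot()
    let normB = b.map { $0 * $0 }.reduce(0, +).squareRoot()
    return dot / (normA * normB)
}

// Hypothetical file names; NLEmbedding stands in for BGE Small / BGE-M3 here.
let files = ["Quarterly budget.xlsx", "Boarding pass BA117.pdf", "Catharsis essay draft.md"]
if let embedder = NLEmbedding.sentenceEmbedding(for: .english),
   let query = embedder.vector(for: "my flight ticket") {
    let ranked = files
        .compactMap { name in embedder.vector(for: name).map { (name, cosine(query, $0)) } }
        .sorted { $0.1 > $1.1 }
    print(ranked.first?.0 ?? "no match")   // ideally the boarding pass
}
```

This also explains the ABC-1/ABC-2 blind spot mentioned earlier: if file names carry no semantic signal, their embeddings give the ranking nothing to work with.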
You can separately set the speed at which content is indexed in the Messages app and the file system. The app runs completely offline, meaning none of the data stored on your Mac ever leaves the device. Still, if you’re on the fence about the privacy aspect, you can disable indexing (and semantic search) for messages and the file container individually.
From a functional perspective, Vector is extremely responsive, well designed, and carefully executed. The only area that needs improvement is semantic search and understanding, and that is largely beyond the developer’s direct control: it can be addressed either by fine-tuning the underlying AI model or by swapping in a smarter one.
Currently, you cannot load an AI model of your choice. I would have liked to try one of Google’s Gemma models, or something from the DeepSeek and Qwen families. It would also have been great to assign specific AI models to specific tasks. Contextual image search, for example, needs a multimodal model to produce the best results.
There are already plenty of open-source models on Hugging Face that can do this. Running SmolVLM2 on an iPhone 16 Pro for visual identification (even from the live camera feed) was quite rewarding. If you’re looking for a low-stakes, minimal-fuss Spotlight alternative, Vector fills that gap pretty well overall. Only in some areas do the underlying AI brains let it down.