
Pain Points we solve


This is a list of pain points we solve, taken from actual customer interviews. These are their words, not ours:

  • When there are too many transactions, you have to use a cache - even with MongoDB.

  • A small, insignificant query can cause major problems when there is high concurrency (like 100k calls in a second). There is nothing to diagnose or fix when the individual query returns in 10ms. This is where you miss the need to cache and end up with problems.

  • Throughput is an issue because the supply chain has a lot of data coming in.

  • Disk IO in SQL Server is high when the client starts dropping a lot of files.

  • SQL Server doesn't scale horizontally. It only scales vertically.

  • DynamoDB can scale infinitely, or as deep as your pockets will go. You can throw as much data as you want at it. It will never even stutter. But it's very expensive.

  • CRUD implementation requires a lot of work.

  • If you understand the internals, SQL Server can be optimized very easily to scale up. However, you are always going to have that 200 ms lag when the page flashes before the data loads.

  • Sync can fail if a lot of transactions are happening or we have updated data in bulk.

  • When we are loading the data into the DW at the same time as people are reading the database, it can cause a deadlock, or the query gets suspended. That means it is waiting for resources. One option is to increase the resources and upgrade the server. But it is very, very expensive. Too expensive to be viable.

  • Right now, we have a managed instance, general purpose, and it is 16 cores ($4k per month). The next level is the business one, which is almost double the cost.

  • We can reach a stage where performance improvement is no longer possible by database tuning.

  • We have created quite a few alerts. If a deadlock happens, we get an alert. If a job fails, we get an alert and do some troubleshooting. We have written a few stored procedures, which run every Saturday & Sunday to optimize the indexes. We need to keep checking because if you create too many indexes, it degrades your performance.

  • You can't have too many or too few indexes. A balance is needed. That is the biggest challenge for the database. You cannot reach perfection in that because your queries & data are always changing.

  • Databases are fairly effective at taking a one-time hit for a query and caching it for subsequent calls. However, this does not work very well for transactional data.

  • Cursor based logic is not cached.

  • Certain subqueries may end up running as if they were cursors. The optimizer chooses to change the query path and still does row-by-row processing in many cases.

  • Redis Enterprise brings a number of capabilities that are very useful, but customers can't use it because of cost.

  • There is no Redis library available that can help users simplify their Redis implementation.

  • Redis is faster than other databases like MongoDB or Cassandra running in memory or on NVMe drives, because of the code.

  • You cannot do joins in Redis (you can with redfly).

  • Scaling SQL Server by adding hardware or spending more developer time on it is hard.

  • A cache is needed to store about a million words (a few MB) per call, which needs to be held on to until the end of the user session. This is important when a user is interacting with an LLM chatbot over multiple messages in a session.

  • It is really expensive to run Postgres in AWS. Even for a reasonably sized instance, it is hundreds of dollars per month. It is a big cost center.

  • This is a pain point for legacy apps: they have been running for so long that the effort to make a change is humongous. It's going to take them a lot of time to get into the Microservices world.

  • If you use Spring Boot, it's easy enough to delegate to a secondary L2 cache with Redis or something else that will go in between. Your solution is more than that. (A rough sketch of that Spring Boot L2 setup follows this list.)

  • In Blockchain, another great use case for this product is the go-between layer: reading the ledger, understanding which of our last events were recorded in some relational database, and synchronizing these two pieces.

  • We use Redis almost on every app that we write. It is on the high transactional ones where we start to run into challenges.

  • The industry struggles with legacy applications where a lot of code change is needed to introduce caching. Sometimes it can take a week, sometimes a month. It would be great to have a proxy layer which can be introduced to access the cache transparently.

  • Redis does not work when we need to join two datasets (redfly does).

  • The capability to expire the cache via a notification service is something most companies are looking for. The ability to refresh a record without TTL-based expiration is very useful.

  • Seamless caching without the need for code changes.
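
For reference, here is a minimal sketch of the Spring Boot pattern the quote above refers to: delegating reads to a secondary (L2) cache backed by Redis through Spring's cache abstraction. This is an illustration only, not redfly.ai's implementation; the class, cache name, and helper methods (ProductService, "products", loadFromDatabase, saveToDatabase) are made up for the example, and it assumes spring-boot-starter-data-redis on the classpath with a Redis instance at the default localhost:6379.

```java
// Illustrative sketch only: a Spring Boot app delegating reads to an L2 cache in Redis.
// ProductService, the "products" cache name, and the load/save helpers are hypothetical;
// assumes spring-boot-starter-data-redis and a local Redis server.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.stereotype.Service;

@SpringBootApplication
@EnableCaching // turns on Spring's cache abstraction; Boot backs it with Redis when the Redis starter is present
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}

@Service
class ProductService {

    // First call for a given id hits the database; later calls are served from the "products" cache in Redis.
    @Cacheable("products")
    public Product findById(long id) {
        return loadFromDatabase(id); // placeholder for the real repository call
    }

    // The cached entry has to be evicted by hand whenever the row changes;
    // that manual bookkeeping is the per-query code change the quotes above complain about.
    @CacheEvict(value = "products", key = "#product.id")
    public Product update(Product product) {
        return saveToDatabase(product); // placeholder for the real repository call
    }

    private Product loadFromDatabase(long id) { return new Product(id, "example"); }

    private Product saveToDatabase(Product product) { return product; }
}

// Serializable so the default Redis value serializer can store it.
record Product(long id, String name) implements java.io.Serializable {}
```

Even in this "easy" case, every cached read path and every write path needs an annotation and a key expression, which is exactly the kind of per-query code change the quotes about legacy apps and transparent caching are pushing back on.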
