Keeping Full History with the API

I'm looking to pull all of my request records out of the Pi to keep in a separate database.
Currently I'm set up to use MySQL, and I'm using Node-RED to initiate the pull on a timer.
I've been trying to get the API to work, but I've run into some problems.

  • I can't filter the admin/api.php?getAllQueries route. https://discourse.pi-hole.net/t/pi-hole-api/1863 The API doc says there are "undocumented filters", but as far as I can tell from GitHub, none of these filters are exposed in the PHP route. This isn't 100% blocking, but it would be nice to only request records I don't already have. GitHub places of interest: PHP, FTL.
  • Even if I could filter for the times I wanted, I would have no way to identify duplicates. The internal system keeps a unique ID, but the API doesn't expose it. The records themselves are not unique: over a 2-hour period, about 10-15% of my records are "duplicates" with the same [timestamp, IP, request, requestType (IPv4/IPv6), responseType (enumeration)].
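To illustrate the problem: without a unique ID, the best you can do is deduplicate on the composite of the exposed fields, which also collapses any legitimate repeats. A minimal Python sketch (the row layout here is illustrative, not the API's documented format):

```python
def dedupe(records):
    """Drop rows whose entire field tuple repeats.

    NOTE: because there is no unique ID, genuinely distinct queries
    that happen to share all exposed fields are lost too.
    """
    seen = set()
    unique = []
    for rec in records:
        key = tuple(rec)  # the whole row acts as the dedup key
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Illustrative rows: [timestamp, client IP, domain, request type, response type]
rows = [
    ["1525723800", "192.168.1.10", "example.com", "IPv4", "2"],
    ["1525723800", "192.168.1.10", "example.com", "IPv4", "2"],  # exact repeat
    ["1525723900", "192.168.1.11", "pi-hole.net", "IPv6", "3"],
]
print(len(dedupe(rows)))  # 2
```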

Has anyone solved these issues? A Google search didn't turn up much.
Maybe I would have better luck trying to grab data out of the SQLite database the Pi-hole maintains for itself?
Or maybe there is a log I haven't found that I could parse and write to my DB?

Thanks in advance for the help!

P.S. First post!

This is just what I was going to suggest to you. It is the simplest way of doing it and will also give you immediate access to the unique IDs, so duplicate prevention will be a piece of cake.

If you still want to do it through the API, we can walk through the available possibilities but I can already assure you that nothing will come close to the ease of just accessing our SQLite database.

Possible ways of limiting the results from the API would be, e.g.
http://pi.hole/admin/api.php?getAllQueries&from=1525723800&until=1525724400
where from and until are Unix timestamps.
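As an example of driving that filter programmatically, a small Python helper can build the URL from a datetime range; the host and endpoint are taken from the example above, and actually fetching the result is left to the caller:

```python
from datetime import datetime
from urllib.parse import urlencode

def all_queries_url(start: datetime, end: datetime,
                    host: str = "pi.hole") -> str:
    """Build a getAllQueries URL with Unix-timestamp from/until filters."""
    params = urlencode({
        "from": int(start.timestamp()),
        "until": int(end.timestamp()),
    })
    return f"http://{host}/admin/api.php?getAllQueries&{params}"

# e.g. the 10-minute window from the example above
url = all_queries_url(datetime.fromtimestamp(1525723800),
                      datetime.fromtimestamp(1525724400))
print(url)
```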


Wow, I swear I tried that! I must have had a typo or something... but I just tried it and it's working, so thanks!

On the note of using the SQLite DB that's already in place, I'll probably switch over to that; I just need to read up on the SQLite syntax/system.

Do you know if there are any gotchas? (For example: are the records ever dumped? Is the database in some unusual directory?)

It is very similar, and in most respects probably identical, to MySQL syntax.

Yes, the database gets cleared every now and then, keeping only queries no older than 365 days, to limit endless growth of the database. This retention period is configurable and could have been changed by the user.

It is located at /etc/pihole/pihole-FTL.db, though this location is also configurable and could have been changed by the user.
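As a sketch of reading from that database with Python's built-in sqlite3 module: the long-term database has a queries table with an id column, so you can fetch only rows newer than the last ID you stored. The column list below is an assumption about the FTL schema; verify it against your version with `.schema queries` in the sqlite3 shell.

```python
import sqlite3

def fetch_new_queries(db_path, last_seen_id=0):
    """Return rows from FTL's queries table with id > last_seen_id.

    Columns (id, timestamp, type, status, domain, client) are assumed
    from the FTL long-term database schema -- check your version.
    """
    con = sqlite3.connect(db_path)
    try:
        cur = con.execute(
            "SELECT id, timestamp, type, status, domain, client "
            "FROM queries WHERE id > ? ORDER BY id",
            (last_seen_id,),
        )
        return cur.fetchall()
    finally:
        con.close()

# Usage: remember the highest id you inserted into MySQL, then on the
# next timer tick pull only what's new:
#     rows = fetch_new_queries("/etc/pihole/pihole-FTL.db", last_seen_id)
```

Since the id column is unique and monotonically increasing, this gives you incremental pulls with no duplicates at all.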

When a user decides to flush the log, the database may be emptied out. However, with the strong privacy-level implementation coming with Pi-hole v4.0, I have high hopes that log flushing will no longer be needed.
