Check out ibis!
https://github.com/ibis-project/ibis
Ibis might be an option. Its syntax is similar to pandas, and it can compile the same expression to a number of SQL dialects, or to PySpark or Dask.
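To make the idea concrete, here is a toy sketch (not Ibis's real API) of what "compile to SQL" means: you build up a dataframe-style expression object with chained method calls, and only at the end render it to a SQL string. The `Table` class and its methods are invented for illustration.

```python
# Toy sketch of the idea behind Ibis: a chained, dataframe-style
# expression that compiles to SQL text instead of executing eagerly.

class Table:
    def __init__(self, name, columns):
        self.name = name
        self.columns = columns
        self._filters = []
        self._selected = None

    def filter(self, condition):
        # condition is a raw SQL fragment in this toy; Ibis instead
        # builds these from typed column expressions.
        new = Table(self.name, self.columns)
        new._filters = self._filters + [condition]
        new._selected = self._selected
        return new

    def select(self, *cols):
        new = Table(self.name, self.columns)
        new._filters = list(self._filters)
        new._selected = list(cols)
        return new

    def compile(self):
        cols = ", ".join(self._selected or self.columns)
        sql = f"SELECT {cols} FROM {self.name}"
        if self._filters:
            sql += " WHERE " + " AND ".join(self._filters)
        return sql

t = Table("events", ["user_id", "ts", "amount"])
query = t.filter("amount > 100").select("user_id", "amount")
print(query.compile())
# SELECT user_id, amount FROM events WHERE amount > 100
```

Because nothing runs until compile time, the same expression tree can in principle target different backends, which is the appeal of a tool like Ibis.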
I have a significant amount of experience with both of them. Pandas is fine for initial exploratory analysis (loading, plotting, reshaping, exporting, etc.). However, its API has inconsistencies and subtle "features" around silent data coercion that make it hard to use in production. Seaborn is much nicer to work with than matplotlib, and Jupyter is useful for making interactive presentations. So I believe pandas/Jupyter have a place, but there has been a tendency to create a pandas-like wrapper for all data retrieval, such as:
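One small, reproducible instance of the silent coercion mentioned above: introducing a single missing value into an otherwise-integer pandas Series silently upcasts the whole column to float64 rather than raising.

```python
import pandas as pd

# An all-integer Series keeps an integer dtype.
s = pd.Series([1, 2, 3])
print(s.dtype)   # int64

# Add one missing value and the ints are silently coerced to floats,
# because the default integer dtype cannot represent NaN.
s2 = pd.Series([1, 2, None])
print(s2.dtype)  # float64
```

In production code this kind of quiet dtype change can propagate a long way before anyone notices, which is the complaint being made here.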
https://github.com/ibis-project/ibis https://github.com/blaze/blaze
pandas is a great example of data science code at its best and its worst. If you look at the source code, you will see that nearly every object and function allows far too many variations in input options, and therefore something like 20 conditional statements. For instance, I believe DataFrame's init method can take a dictionary, a DataFrame, a Series, etc., rather than having a class method for each one. Contrast that with requests, where the public interface is a clean requests.get, requests.post. Yet if you have a CSV file you are only loading once or twice to peek at, it's super efficient. I think my biggest issue is all the effort that goes into pandas-like APIs, e.g. https://github.com/ibis-project/ibis. To me, it doesn't make sense to take something stable and well known (SQL) and build a complex DSL on top of it so that it works like pandas.
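The design contrast being argued for can be sketched in a few lines: instead of one polymorphic constructor that sniffs the input's type and branches, each input shape gets its own named constructor. The `Frame` class here is invented for illustration.

```python
# Sketch of the design contrast: explicit named constructors instead of
# one __init__ that branches on dict vs. records vs. Series vs. ...

class Frame:
    def __init__(self, columns):
        # columns: dict mapping column name -> list of values.
        # The plain constructor accepts exactly one shape, nothing else.
        self.columns = columns

    @classmethod
    def from_dict(cls, d):
        # Column-oriented input: {"a": [1, 2], ...}
        return cls({k: list(v) for k, v in d.items()})

    @classmethod
    def from_records(cls, records):
        # Row-oriented input: [{"a": 1}, {"a": 2}, ...]
        keys = records[0].keys()
        return cls({k: [r[k] for r in records] for k in keys})

f1 = Frame.from_dict({"a": [1, 2]})
f2 = Frame.from_records([{"a": 1}, {"a": 2}])
print(f1.columns == f2.columns)  # True
```

pandas does in fact ship DataFrame.from_dict and DataFrame.from_records alongside the do-everything DataFrame(...) constructor; the complaint is that the polymorphic path is the default one, with all the conditional logic that implies.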