Well, I think I got to the bottom of it. After fixing the “too many open files” error, I ran into what I believe is the real issue with loading my schema: OOM.
I read about troubleshooting OOM at that same link as above and saw that it recommends 7-8 GB of RAM (a t2.large or larger on AWS), while I was running on a t2.micro (1 GB). I guess that would work for smaller schemas, but not for our needs. Now I'm back to deciding where to go from here to keep costs down. I'm still hesitant about Slash as a more permanent solution because of the lack of pricing info: I used 12 of my free credits and still haven't loaded any actual data on it, and based on our normal user load I estimate we would use ~3-10K credits/day.
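As a side note, if you want to sanity-check how much RAM a box actually has against that 7-8 GB recommendation before kicking off a load, a rough check (assuming Linux and Python 3 on the instance) is something like:

```python
# Rough check of total physical RAM against the ~7-8 GB recommendation.
# Linux-only: relies on SC_PAGE_SIZE / SC_PHYS_PAGES from sysconf.
import os

page_size = os.sysconf("SC_PAGE_SIZE")    # bytes per memory page
phys_pages = os.sysconf("SC_PHYS_PAGES")  # total physical pages
total_gb = page_size * phys_pages / (1024 ** 3)

print(f"Total RAM: {total_gb:.1f} GB")
if total_gb < 7:
    print("Below the ~7-8 GB recommendation; larger schemas will likely OOM.")
```

On a t2.micro this reports roughly 1 GB, which lines up with the OOM I was seeing.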
Edit: Yep, reconfigured the instance as a t3.large (8 GB) and the schema built and was done before I could even get the logs open.
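For anyone who wants to script the same resize, a minimal boto3 sketch would look like this (the instance ID and region are placeholders, not my real values, and the instance has to be stopped before its type can be changed):

```python
# Sketch: resize an existing EC2 instance to t3.large with boto3.
# Instance ID and region below are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"

# The instance type can only be changed while the instance is stopped.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Switch to t3.large (8 GB), then bring the instance back up.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "t3.large"},
)
ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
```

Just note that the public IP usually changes on a stop/start unless you have an Elastic IP attached.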