[10:17:56] dcausse: and since we're talking about code reviews: https://gerrit.wikimedia.org/r/c/wikimedia-event-utilities/+/1090885
[10:18:29] gehel: I did not forget about this one I promise :)
[10:31:56] errand+lunch
[10:40:24] :)
[15:19:43] Hi, I started working with jupyter (via hub) to query iceberg tables via spark. If I want to export the contents of a dataframe, how do I know where df.write.csv('some/path') ends up (physically)? It’s not the stat machine I use to tunnel into jupyter hub; at least I can’t find that file there.
[15:27:20] I guess I have to write to HDFS
[15:35:12] pfischer: yes, better to write to hdfs as a single partition and then use hdfs dfs -text hdfs:///path_to_file.csv
[15:39:08] dcausse: Thanks! If I use an absolute path (hdfs:///analytics-hadoop/users/pfischer…) for writing, I get an error: Permission Denied. Using hdfs://analytics-hadoop/users/pfischer… works, but hdfs dfs -ls analytics-hadoop/users/pfischer says that no such directory exists.
[15:40:47] pfischer: without the hdfs:// scheme it should be "hdfs dfs -ls /user/pfischer" I think
[15:41:31] Ah, TIL… Thank you once more!
[15:42:14] AFAIK it can be: hdfs://analytics-hadoop/user/pfischer, hdfs:///user/pfischer or /user/pfischer
[17:52:52] !issync
[17:52:53] Syncing #wikimedia-search (requested by JJMC89)
[17:52:54] Set /cs flags #wikimedia-search wmopbot +t
[17:52:56] Set /cs flags #wikimedia-search Az1568 -AFRefiorstv
[18:28:13] dinner
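
A minimal PySpark sketch of the export flow discussed above, not verified against the analytics cluster: it assumes the Jupyter (hub) kernel already provides a SparkSession, and the database, table, and output path names are hypothetical placeholders.

    from pyspark.sql import SparkSession

    # In a Jupyter kernel a SparkSession usually already exists;
    # getOrCreate() reuses it rather than starting a new one.
    spark = SparkSession.builder.getOrCreate()

    # Hypothetical Iceberg table, for illustration only.
    df = spark.sql("SELECT * FROM some_db.some_iceberg_table LIMIT 1000")

    # Coalesce to a single partition so the export lands as one CSV part
    # file on HDFS, which "hdfs dfs -text" can then print.
    (df.coalesce(1)
       .write
       .mode("overwrite")          # replace any previous export at this path
       .option("header", "true")   # include column names in the CSV
       .csv("hdfs:///user/pfischer/export"))  # equivalent to
                                              # hdfs://analytics-hadoop/user/pfischer/export
                                              # or /user/pfischer/export

Note that Spark writes a directory rather than a single file, so the data ends up in a part file inside it; print it with something like "hdfs dfs -text hdfs:///user/pfischer/export/part-*", or pull it onto the local stat machine with "hdfs dfs -getmerge /user/pfischer/export export.csv".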