Lecture 1: Introduction to Research — [📝 Lecture Notebooks]
Lecture 2: Introduction to Python — [📝 Lecture Notebooks]
Lecture 3: Introduction to NumPy — [📝 Lecture Notebooks]
Lecture 4: Introduction to pandas — [📝 Lecture Notebooks]
Lecture 5: Plotting Data — [📝 Lecture Notebooks]
```js
// Based on https://web.dev/mediastreamtrack-insertable-media-processing/
// Uses the WebCodecs API, which was supported only in Chrome as of November 2021.
async function convertI420AFrameToI420Frame(frame) {
  const { width, height } = frame.codedRect;
  // Y, U, V, and alpha planes are stored sequentially. Copy them out,
  // then hand only the YUV planes to the new frame.
  const buffer = new Uint8Array(width * height * 3);
  // copyTo() is asynchronous, so wait for the planes to land in the buffer.
  await frame.copyTo(buffer, { rect: frame.codedRect });
  const init = {
    timestamp: 0,
    codedWidth: width,
    codedHeight: height,
    format: 'I420',
  };
  return new VideoFrame(buffer, init);
}
```
Here's an efficient way to load a large dataset into Vertica: split it into multiple pieces and parallelize the load.
Note that this only makes sense if your Vertica cluster is a single node. If it runs on multiple nodes, there are more efficient ways of doing this.
For this example, the large CSV file will be called large_file.csv. If your file is under 1 GB, it probably makes sense to load it with a single COPY statement instead.
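The split-and-parallelize approach can be sketched in Python. This is a minimal sketch, not the article's exact method: the target table `my_table`, the chunk count, and the `vsql -c "COPY ..."` invocation are assumptions you would adapt to your own schema and connection settings.

```python
import csv
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path


def split_csv(path, n_chunks):
    """Stream rows of a CSV into n_chunks part files, round-robin.

    Round-robin keeps memory flat even for very large files.
    Returns the list of part-file paths.
    """
    part_paths = [Path(f"{path}.part{i}") for i in range(n_chunks)]
    outs = [p.open("w", newline="") for p in part_paths]
    writers = [csv.writer(f) for f in outs]
    with open(path, newline="") as f:
        for i, row in enumerate(csv.reader(f)):
            writers[i % n_chunks].writerow(row)
    for f in outs:
        f.close()
    return part_paths


def load_chunk(chunk_path):
    # Hypothetical target table "my_table"; adjust to your schema.
    # DIRECT writes straight to ROS storage, which suits bulk loads.
    sql = f"COPY my_table FROM LOCAL '{chunk_path}' DELIMITER ',' DIRECT;"
    subprocess.run(["vsql", "-c", sql], check=True)


# Usage (requires a reachable Vertica and the vsql client on PATH):
# parts = split_csv("large_file.csv", n_chunks=4)
# with ThreadPoolExecutor(max_workers=4) as pool:
#     list(pool.map(load_chunk, parts))
```

One parallel COPY per chunk lets Vertica parse the pieces concurrently; on a single node, matching the worker count to the number of cores is a reasonable starting point.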