Design a command-line utility that lists all PRs in a given time interval (last hour, day, 3 weeks, 6 months).
The list should include:
- all PRs merged within the specified interval
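One possible shape for such a utility is sketched below, using GitHub's public search API. The interval grammar (`1h`, `1d`, `3w`, `6m`), the positional arguments, and the title-only output are all assumptions for illustration, not part of this spec:

```python
# Hypothetical sketch: list PRs merged within a given interval via the
# GitHub search API. Interval grammar and output format are assumptions.
import json
import sys
import urllib.parse
import urllib.request
from datetime import datetime, timedelta, timezone

UNITS = {"h": "hours", "d": "days", "w": "weeks"}

def parse_interval(spec: str) -> timedelta:
    """Parse '1h', '1d', '3w', '6m' (months approximated as 30 days)."""
    value, unit = int(spec[:-1]), spec[-1]
    if unit == "m":
        return timedelta(days=30 * value)
    return timedelta(**{UNITS[unit]: value})

def merged_prs(repo: str, interval: timedelta) -> list:
    """Return titles of PRs merged in `repo` within the last `interval`."""
    since = (datetime.now(timezone.utc) - interval).strftime("%Y-%m-%dT%H:%M:%SZ")
    query = urllib.parse.urlencode(
        {"q": f"repo:{repo} is:pr is:merged merged:>={since}", "per_page": 100})
    with urllib.request.urlopen(
            f"https://api.github.com/search/issues?{query}") as resp:
        return [item["title"] for item in json.load(resp)["items"]]

if __name__ == "__main__" and len(sys.argv) >= 3:
    repo, spec = sys.argv[1], sys.argv[2]   # e.g. h2oai/h2o-3 3w
    for title in merged_prs(repo, parse_interval(spec)):
        print(title)
```

A real implementation would also need pagination (the search API caps results per page) and authentication to avoid rate limits.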
from cgroups import Cgroup
from cgroups.user import create_user_cgroups
import os
import subprocess

try:
    # set up the cgroup directory for this user
    user = os.getlogin()
    create_user_cgroups(user)
except OSError as e:
    raise SystemExit(f"could not create user cgroups: {e}")
Hello H2O community,
There are many new changes in the H2O ecosystem, and we are working furiously to publish and share them with the community.
In this context, we are preparing a new H2O release, 3.12, with amazing features (e.g., AutoML and XGBoost support), and planning some changes that can affect existing code bases. This email is intended to inform you about them and to start a discussion.
The changes include:
The goal of this assignment is to address the following limitation:
H2O provides an implementation of the PCA algorithm that depends on the Jama library. The library is used for several tasks, including Singular Value Decomposition (SVD). However, the library also introduces sub-optimal performance.
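To make the SVD dependency concrete, here is a minimal NumPy sketch of PCA computed via a thin SVD. It only illustrates the linear algebra a Jama-free implementation needs; it is not H2O's actual code:

```python
# Illustrative only: PCA scores via thin SVD of the centered data matrix.
import numpy as np

def pca_svd(X: np.ndarray, k: int):
    """Return the top-k principal-component scores and directions of X."""
    Xc = X - X.mean(axis=0)                       # center each column
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :k] * s[:k]                     # rows of X in PC coordinates
    return scores, Vt[:k]                         # directions = right singular vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
scores, components = pca_svd(X, 2)                # 100 samples -> 2 components
```

Computing the thin SVD of the centered data directly, rather than eigendecomposing the covariance matrix, is the numerically preferred route and is exactly the step Jama performs for H2O today.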
import org.python.core.PyRunnableBootstrap;
import org.python.core.CodeBootstrap;
import org.python.core.CodeLoader;
import org.python.core.PyFunction;
import org.python.core.Py;
import org.python.core.PyObject;
import org.python.core.ThreadState;
import org.python.core.PyFrame;
import org.python.core.PyCode;
import org.python.compiler.Filename;
$SPARK_HOME/bin/spark-submit \
  --master "local[*]" \
  --class water.SparklingWaterDriver \
  --packages ai.h2o:sparkling-water-examples_2.11:2.0.0 \
  --executor-memory=8g \
  --driver-memory=8g \
  --conf spark.driver.extraJavaOptions="-XX:MaxPermSize=256m" \
  --conf spark.executor.extraJavaOptions="-XX:MaxPermSize=256m" \
  --conf spark.ext.h2o.node.log.level=INFO \
  --conf spark.ext.h2o.client.log.level=INFO \
public ModelsV3 importModel(int version, ModelImportV3 mimport) {
  ModelsV3 s = (ModelsV3) Schema.newInstance(ModelsV3.class);
  try {
    List<Key> importedKeys = new ObjectTreeBinarySerializer().load(FileUtils.getURI(mimport.dir));
    Model model = (Model) importedKeys.get(0).get();
    s.models = new ModelSchema[1];
    s.models[0] = (ModelSchema) Schema.schema(version, model).fillFromImpl(model);
  } catch (IOException e) {
    throw new H2OIllegalArgumentException("dir", "importModel", e);
  }
  return s;
}
// It depends on your next processing step: whether you need the data in
// (a) a Spark DataFrame, or you are happy with the data in (b) an H2OFrame.

// (a) We need the data in a DataFrame
val reconstructionError = dlModel.scoreAutoEncoder(train, Key.make)
val df: DataFrame = h2oContext.asDataFrame(reconstructionError)(sqlContext)
val joinedFrame = testData.zip(df) // This is a Spark DataFrame
// Note: you need Spark 1.3 or 1.4

// (b) We need the data in an H2OFrame
val testDataH2OFrame: H2OFrame = h2oContext.asH2OFrame(testData)