23,310 Hits in 3.7 sec

opML: Optimistic Machine Learning on Blockchain [article]

KD Conway, Cathie So, Xiaohang Yu, Kartin Wong
2024 arXiv   pre-print
The integration of machine learning with blockchain technology has witnessed increasing interest, driven by the vision of decentralized, secure, and transparent AI services.  ...  In this context, we introduce opML (Optimistic Machine Learning on chain), an innovative approach that empowers blockchain systems to conduct AI model inference. opML relies on an interactive fraud proof protocol  ...  DNN Computation in Multi-Phase opML In this demonstration, we present a DNN computation in a two-phase opML approach: • The computation process of Machine Learning, specifically Deep Neural Networks (DNN  ... 
arXiv:2401.17555v2 fatcat:mjnor3j3sbbz3kc4qfqipthstm
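
The entry above describes opML's interactive fraud-proof approach, in which a disputed DNN computation is narrowed down to a single step that can be re-executed cheaply on chain. Below is a minimal, illustrative bisection game in Python; the `step`/`trace`/`bisect_dispute` names and the toy state function are hypothetical and stand in for opML's actual checkpointing and on-chain verification.

```python
# Hypothetical sketch of an interactive fraud-proof (bisection) game:
# two parties disagree about the final state of a long deterministic
# computation; they bisect over intermediate checkpoints until a single
# step remains, which a verifier can re-execute cheaply.

def step(state: int) -> int:
    """Stand-in for one deterministic step of a DNN computation."""
    return (state * 31 + 7) % 1_000_003

def trace(initial: int, n_steps: int) -> list[int]:
    """Checkpoints after every step (a real system would hash VM states)."""
    states = [initial]
    for _ in range(n_steps):
        states.append(step(states[-1]))
    return states

def bisect_dispute(honest: list[int], claimed: list[int]) -> int:
    """Return an index i where both parties agree on state i-1 but disagree on state i."""
    lo, hi = 0, len(honest) - 1          # invariant: agree at lo, disagree at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if honest[mid] == claimed[mid]:
            lo = mid
        else:
            hi = mid
    return hi

honest = trace(1, 1024)
claimed = honest[:500] + [x + 1 for x in honest[500:]]   # cheat from step 500 on
disputed = bisect_dispute(honest, claimed)
# The "on-chain" check: re-execute only the single disputed step.
assert step(honest[disputed - 1]) == honest[disputed] != claimed[disputed]
print(f"fraud proven at step {disputed}")
```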

Distributed Deep Learning: From Single-Node to Multi-Node Architecture

Jean-Sébastien Lerat, Sidi Ahmed Mahmoudi, Saïd Mahmoudi
2022 Electronics  
Local parallelism is considered quite important in the design of a time-efficient multi-node architecture because DDL depends on the time required by all the nodes.  ...  In recent years, deep learning (DL) models have been used in several applications with large datasets and complex models.  ...  The problem is that by doing this, the network communication is intensified, for example when the neural network is divided [28] across several machines.  ... 
doi:10.3390/electronics11101525 fatcat:eemwzpv4ifh4xlgvbx7hknsu5y
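
As a companion to the entry above, here is a plain-NumPy simulation of the data-parallel pattern it discusses: each node computes a gradient on its own shard and an all-reduce-style average combines them. Real multi-node DDL uses NCCL/MPI collectives over a real network; the shard sizes and model here are illustrative only.

```python
# Minimal simulation of data-parallel distributed deep learning:
# each "node" holds a shard of the data, computes a local gradient for a
# linear model, and an all-reduce-style average combines them.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
true_w = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = X @ true_w + 0.01 * rng.normal(size=1000)

n_nodes = 4
X_shards = np.array_split(X, n_nodes)
y_shards = np.array_split(y, n_nodes)

w = np.zeros(5)
lr = 0.1
for _ in range(200):
    # Each node computes a gradient on its own shard (could run in parallel).
    local_grads = [
        2 * Xs.T @ (Xs @ w - ys) / len(ys)
        for Xs, ys in zip(X_shards, y_shards)
    ]
    # "All-reduce": average the local gradients, then every node applies the update.
    w -= lr * np.mean(local_grads, axis=0)

print("recovered weights:", np.round(w, 2))
```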

In-database distributed machine learning

Sandeep Singh Sandha, Wellington Cabrera, Mohammed Al-Kateb, Sanjay Nair, Mani Srivastava
2019 Proceedings of the VLDB Endowment  
In this demonstration, we give a practical exhibition of a solution that enables distributed machine learning natively inside database engines.  ...  Machine learning has enabled many interesting applications and is extensively being used in big data systems.  ...  The audience will experience the entire machine learning workflow presented in Figure 3 and will be free to vary the number of iterations or the number of nodes in the neural network model.  ... 
doi:10.14778/3352063.3352083 fatcat:uz7cagmlpzcg5mbmxt75xurguu
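
To make the in-database idea concrete, here is a hedged sketch (using sqlite3 rather than the paper's database engine): the per-iteration gradient of a simple linear regression is computed by SQL aggregate queries inside the engine, and only a few scalars cross the boundary to the client.

```python
# Illustrative sketch, not the paper's implementation: run the gradient
# step of a simple linear regression inside the database engine using
# SQL aggregates, keeping only the model parameters in the client.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE points (x REAL, y REAL)")
con.executemany(
    "INSERT INTO points VALUES (?, ?)",
    [(i / 100.0, 3.0 * (i / 100.0) + 1.0) for i in range(100)],  # y = 3x + 1
)

w, b, lr = 0.0, 0.0, 0.5
for _ in range(500):
    # The database computes the per-epoch gradient sums; only two scalars
    # (plus the row count) cross the engine boundary each iteration.
    grad_w, grad_b, n = con.execute(
        "SELECT SUM((:w * x + :b - y) * x), SUM(:w * x + :b - y), COUNT(*) "
        "FROM points",
        {"w": w, "b": b},
    ).fetchone()
    w -= lr * 2 * grad_w / n
    b -= lr * 2 * grad_b / n

print(f"learned y = {w:.2f}x + {b:.2f}")  # approximately y = 3.00x + 1.00
```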

Energy-based Graph Convolutional Networks for Scoring Protein Docking Models [article]

Yue Cao, Yang Shen
2019 bioRxiv   pre-print
Directly learning from structure data in graph representation, EGCN represents the first successful development of graph convolutional networks for protein docking.  ...  In this study the two challenging problems in protein docking are regarded as relative and absolute scoring, respectively, and addressed in one physics-inspired deep learning framework.  ...  ACKNOWLEDGEMENTS This work was supported by the National Institutes of Health (R35GM124952).  ... 
doi:10.1101/2019.12.19.883371 fatcat:5t2qjbyaczhtthjsvdph4tjwue
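
This is not EGCN's energy-based architecture, but a minimal NumPy graph-convolution layer of the kind such scoring models stack over residue-contact graphs, using the standard propagation rule H' = ReLU(D^-1/2 (A + I) D^-1/2 H W); the toy graph and feature sizes are arbitrary.

```python
# A minimal graph-convolution layer with symmetric normalization,
# applied twice and pooled to a single scalar "score" for a toy graph.
import numpy as np

def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One graph-convolution layer: ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

rng = np.random.default_rng(1)
A = np.array([[0, 1, 0, 0],                   # toy 4-node contact graph
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H = rng.normal(size=(4, 8))                   # per-node (per-residue) features
W1, W2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 1))

H1 = gcn_layer(A, H, W1)
score = gcn_layer(A, H1, W2).mean()           # pool node outputs to one score
print("toy docking-model score:", round(float(score), 3))
```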

Analysis of Intrusion Detection and Classification using Machine Learning Approaches

Anjum Khan, Anjana Nigam
2017 INTERNATIONAL JOURNAL ONLINE OF SCIENCE  
This paper discusses some commonly used machine learning techniques in Intrusion Detection Systems and also reviews a number of the existing machine learning IDS proposed by researchers at different  ...  Analysis showed that applying machine learning techniques in intrusion detection might reach a high detection rate.  ...  [12] likewise applied a multi-level model with various machine learning techniques, for example, C5, MLP, and Naïve Bayes.  ... 
doi:10.24113/ijoscience.v3i10.13 fatcat:lp56zcmjlzduxipfvyqhskuhky
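
As a hedged illustration of the classifier families the survey mentions (C5-style decision trees, MLPs, Naïve Bayes), the sketch below trains each on synthetic data standing in for labelled network-connection records; the dataset, features, and resulting accuracies are illustrative, not results from the paper.

```python
# Compare a decision tree (stand-in for C5), an MLP, and Gaussian Naive
# Bayes on synthetic binary data (1 = attack, 0 = normal traffic).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           n_classes=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "decision tree (C5-like)": DecisionTreeClassifier(random_state=0),
    "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    "naive Bayes": GaussianNB(),
}
for name, model in models.items():
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: test accuracy {acc:.2f}")
```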

Snap ML: A Hierarchical Framework for Machine Learning [article]

Celestine Dünner, Thomas Parnell, Dimitrios Sarigiannis, Nikolas Ioannou, Andreea Anghel, Gummadi Ravi, Madhusudanan Kandasamy, Haralampos Pozidis
2018 arXiv   pre-print
The framework, named Snap Machine Learning (Snap ML), combines recent advances in machine learning systems and algorithms in a nested manner to reflect the hierarchical architecture of modern computing  ...  We evaluate the performance of Snap ML in both single-node and multi-node environments, quantifying the benefit of the hierarchical scheme and the data streaming functionality, and comparing with other  ... 
arXiv:1803.06333v3 fatcat:l75n5irwuzcatgwrw5jhgvxro4
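
The following is a hedged NumPy simulation of a two-level scheme in the spirit of the "hierarchical" framing above: workers inside a node take local steps on their data partitions, the node averages them, and an outer round averages across nodes. It is not Snap ML's actual solver or API.

```python
# Two-level (node / worker) training simulation for a linear model.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(4000, 10))
w_true = rng.normal(size=10)
y = X @ w_true

n_nodes, workers_per_node = 2, 4
node_data = list(zip(np.array_split(X, n_nodes), np.array_split(y, n_nodes)))

w = np.zeros(10)
for _ in range(50):                       # outer (cluster-level) rounds
    node_models = []
    for Xn, yn in node_data:              # per-node work
        worker_models = []
        for Xw, yw in zip(np.array_split(Xn, workers_per_node),
                          np.array_split(yn, workers_per_node)):
            # Inner level: each worker takes a few local gradient steps.
            wl = w.copy()
            for _ in range(5):
                wl -= 0.05 * 2 * Xw.T @ (Xw @ wl - yw) / len(yw)
            worker_models.append(wl)
        node_models.append(np.mean(worker_models, axis=0))   # node-level combine
    w = np.mean(node_models, axis=0)                         # cluster-level combine

print("parameter error:", round(float(np.linalg.norm(w - w_true)), 6))
```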

Energy-based Graph Convolutional Networks for Scoring Protein Docking Models [article]

Yue Cao, Yang Shen
2019 arXiv   pre-print
Directly learning from 3D structure data in graph representation, EGCN represents the first successful development of graph convolutional networks for protein docking.  ...  In this study the two challenging problems in protein docking are regarded as relative and absolute scoring, respectively, and addressed in one physics-inspired deep learning framework.  ...  ACKNOWLEDGEMENTS This work was supported by the National Institutes of Health (R35GM124952).  ... 
arXiv:1912.12476v1 fatcat:v5yguzbbi5dajo5juh6hjouowu

Navigating the maze of graph analytics frameworks using massive graph datasets

Nadathur Satish, Narayanan Sundaram, Md. Mostofa Ali Patwary, Jiwon Seo, Jongsoo Park, M. Amber Hassaan, Shubho Sengupta, Zhaoming Yin, Pradeep Dubey
2014 Proceedings of the 2014 ACM SIGMOD international conference on Management of data - SIGMOD '14  
Implementing graph traversal, statistics and machine learning algorithms on such data in a scalable manner is quite challenging.  ...  In this work, we offer a quantitative roadmap for improving the performance of all these frameworks and bridging the "ninja gap".  ...  The native code uses MPI message passing [6] to drive the underlying fabric (FDR InfiniBand network in our case) for high bandwidth and low latency communication among the nodes.  ... 
doi:10.1145/2588555.2610518 dblp:conf/sigmod/SatishSPSPHSYD14 fatcat:l5mcilp3ujezbliklr4h2pwrfe
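
For reference, the traversal kernel that the benchmarked frameworks implement (and distribute) in very different ways is ordinary frontier-based BFS; a single-node Python version is sketched below.

```python
# Frontier-based breadth-first search over an adjacency-list graph.
from collections import deque

def bfs_levels(adj: dict[int, list[int]], source: int) -> dict[int, int]:
    """Return the BFS level (hop distance) of every reachable vertex."""
    level = {source: 0}
    frontier = deque([source])
    while frontier:
        u = frontier.popleft()
        for v in adj.get(u, []):
            if v not in level:            # first visit fixes the level
                level[v] = level[u] + 1
                frontier.append(v)
    return level

toy_graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
print(bfs_levels(toy_graph, 0))           # {0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 3}
```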

MPI applications' performances in native vs. virtualized environments using InfiniBand IPoIB virtualization and live migration

2015 Tehnički Vjesnik  
This paper presents a head-to-head native-versus-virtualized HPC MPI application performance analysis on a range of general-use HPC applications by virtualizing InfiniBand via IPoIB at the guest virtual machine  ...  architecture on a proactive fault-tolerance use case.  ...  The guest operating system in the virtual machines, running on top of KVM and ESXi, was configured within the same environment as the native nodes using Puppet configuration management.  ... 
doi:10.17559/tv-20140819115232 fatcat:uzhp7s2xtff4lnembrqcofbwyq
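
A ping-pong latency micro-benchmark is the usual way to quantify the native-versus-virtualized gap the entry above studies. The sketch below assumes mpi4py is available and is meant to be launched with something like `mpirun -n 2 python pingpong.py` on the interconnect under test; the message size and iteration count are arbitrary.

```python
# Two-rank MPI ping-pong: estimates one-way message latency.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
buf = np.zeros(1024, dtype=np.uint8)      # 1 KiB message
n_iters = 1000

comm.Barrier()
start = MPI.Wtime()
for _ in range(n_iters):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=0)
    elif rank == 1:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=0)
elapsed = MPI.Wtime() - start

if rank == 0:
    # Half the round-trip time is the usual one-way latency estimate.
    print(f"one-way latency: {elapsed / n_iters / 2 * 1e6:.1f} us")
```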

Hyper: Distributed Cloud Processing for Large-Scale Deep Learning Tasks [article]

Davit Buniatyan
2019 arXiv   pre-print
Training and deploying deep learning models in real-world applications require processing large amounts of data.  ...  The system implements a distributed file system and a failure-tolerant task processing scheduler, independent of the language and Deep Learning framework used.  ...  We would also like to thank AWS for providing cloud resources for experiments. The project was funded and supported by Snark AI, Inc.  ... 
arXiv:1910.07172v1 fatcat:5qlpul5yqzfyrhfgallvnyj4ne
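
One ingredient mentioned above, failure-tolerant task processing, reduces to re-queuing and retrying tasks whose workers disappear. The thread-pool toy below illustrates that idea only; it is not Hyper's scheduler or distributed file system.

```python
# Retry-on-failure task execution with a bounded number of attempts.
import random
from concurrent.futures import ThreadPoolExecutor

MAX_RETRIES = 3

def flaky_task(task_id: int) -> str:
    if random.random() < 0.3:                 # simulate a transient node failure
        raise RuntimeError(f"task {task_id} lost its worker")
    return f"task {task_id} done"

def run_with_retries(task_id: int) -> str:
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            return flaky_task(task_id)
        except RuntimeError as err:
            print(f"attempt {attempt} failed ({err})")
    return f"task {task_id} failed permanently"

with ThreadPoolExecutor(max_workers=4) as pool:
    for result in pool.map(run_with_retries, range(8)):
        print(result)
```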

Towards Quantum-Enabled 6G Slicing [article]

Farhad Rezazadeh, Sarang Kahvazadeh, Mohammadreza Mosahebfard
2022 arXiv   pre-print
The quantum machine learning (QML) paradigms and their synergies with network slicing can be envisioned as a disruptive technology on the cusp of entering the era of sixth-generation (6G), where the  ...  To this end, we propose a cloud-native federated learning framework based on quantum deep reinforcement learning (QDRL), where distributed decision agents are deployed as micro-services at the edge and cloud  ...  The decision agents are deployed in edge nodes, while the federation layers run at the cloud node in our Kubernetes infrastructure connected to 5G segments.  ... 
arXiv:2212.11755v1 fatcat:kth3nhn245fjdjia5ngk7nwhje

A Data-Centric Optimization Framework for Machine Learning [article]

Oliver Rausch, Tal Ben-Nun, Nikoli Dryden, Andrei Ivanov, Shigang Li, Torsten Hoefler
2022 arXiv   pre-print
Rapid progress in deep learning is leading to a diverse set of quickly changing models, with a dramatically growing demand for compute.  ...  The pipeline begins with standard networks in PyTorch or ONNX and transforms computation through progressive lowering.  ...  Guided Optimization Case Study: EfficientNet In this case study, we consider the EfficientNet-B0 [57] network.  ... 
arXiv:2110.10802v3 fatcat:obpwqoanuzdcdf7j7mgh33fzdu
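
The pipeline's entry point, a standard PyTorch network exported to ONNX, can be sketched as follows; the lowering and data-centric transformations themselves are not reproduced, and the small model and file name are placeholders.

```python
# Export a small PyTorch model to ONNX, the interchange format such an
# optimization pipeline consumes. Assumes PyTorch is installed.
import torch
import torch.nn as nn

model = nn.Sequential(            # a small stand-in network
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)
model.eval()

dummy_input = torch.randn(1, 3, 32, 32)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["logits"])
print("exported model.onnx for the optimization pipeline to consume")
```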

Secure Self Optimizing Software Defined Framework for NB-IoT Towards 5G

Anita Sethi, Shelendra Kumar Jain, Sandip Vijay
2020 Procedia Computer Science  
In this article, a self-optimizing framework for ultra IoT is proposed, which represents the M2M communication and network cloud.  ...  To enhance the customer experience with a cloud-native machine learning solution that reduces cost and congestion, an ultra-traffic optimization heuristic optimizes the traffic flows on congested cells and  ... 
doi:10.1016/j.procs.2020.04.298 fatcat:urm7c2i7dbectjeodzkjsfulg4

First Scalable Machine Learning Based Architecture for Cloud-native Transport SDN Controller

Carlos Manso, Noboru Yoshikane, Ricard Vilalta, Raül Muñoz, Ramon Casellas, Ricardo Martínez, Cen Wang, Filippos Balasis, Takehiro Tsuritani, Itsuro Morita
2021 Zenodo  
We present a cloud-native architecture with a machine learning QoT predictor that enables cognitive functions in transport SDN controllers.  ...  We evaluate the QoT predictor training and auto-scaling capabilities in a real WDM/SDM testbed.  ...  Acknowledgments Work supported by the EC H2020 TeraFlow (101015857) and Spanish AURORAS (RTI2018-099178-I00) and Ministry of Internal Affairs and Communications, Japan grant number JP MI00316.  ... 
doi:10.5281/zenodo.5087536 fatcat:63yq3yjearcw3gfxhwej5bsor4
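
A QoT predictor of the kind referenced above can be approximated as a regressor from lightpath features to an estimated OSNR. The sketch below uses synthetic data and invented features (path length, span count, launch power); it is not the paper's testbed model.

```python
# Train a toy quality-of-transmission (QoT) regressor on synthetic lightpaths.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
length_km = rng.uniform(50, 1500, n)           # total path length
n_spans = np.ceil(length_km / 80)              # amplifier spans
launch_dbm = rng.uniform(-2, 3, n)             # launch power
# Toy ground truth: OSNR drops with spans, improves with launch power.
osnr_db = 35 - 10 * np.log10(n_spans) + launch_dbm + rng.normal(0, 0.5, n)

X = np.column_stack([length_km, n_spans, launch_dbm])
X_tr, X_te, y_tr, y_te = train_test_split(X, osnr_db, random_state=0)

qot = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out lightpaths:", round(qot.score(X_te, y_te), 3))
```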

A Scalable and Cloud-Native Hyperparameter Tuning System [article]

Johnu George, Ce Gao, Richard Liu, Hou Gang Liu, Yuan Tang, Ramdoot Pydipaty, Amit Kumar Saha
2020 arXiv   pre-print
In this paper, we introduce Katib: a scalable, cloud-native, and production-ready hyperparameter tuning system that is agnostic of the underlying machine learning framework.  ...  We present the motivation and design of the system and contrast it with existing hyperparameter tuning systems, especially in terms of multi-tenancy, scalability, fault-tolerance, and extensibility.  ...  ., number of clusters in k-means clustering, learning rate, batch size, and number of hidden nodes in neural networks) cannot be learnt during the training process, unlike the value of model parameters  ... 
arXiv:2006.02085v2 fatcat:msydmwpvzne4rlhjvmc5sxiike
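
The hyperparameters listed above (learning rate, batch size, hidden units) are exactly what a tuning system searches over. The sketch below is a plain random-search loop with a small scikit-learn MLP as the trial workload; it illustrates the search idea only, not Katib itself.

```python
# Random search over learning rate, batch size, and hidden-layer width.
import random
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1500, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

random.seed(0)
best = (0.0, None)
for trial in range(10):                         # each trial = one "experiment"
    params = {
        "learning_rate_init": 10 ** random.uniform(-4, -1),
        "batch_size": random.choice([32, 64, 128]),
        "hidden_layer_sizes": (random.choice([16, 32, 64]),),
    }
    clf = MLPClassifier(max_iter=300, random_state=0, **params).fit(X_tr, y_tr)
    score = clf.score(X_val, y_val)
    best = max(best, (score, trial), key=lambda t: t[0])
    print(f"trial {trial}: {params} -> val accuracy {score:.3f}")
print("best trial:", best[1])
```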
Showing results 1 — 15 out of 23,310 results