Sunday, October 2, 2016

Understanding OAuth2

What is OAuth2?

OAuth2

OAuth2 is, as you will have guessed, version 2 of the OAuth protocol (also called a framework).

This protocol allows third-party applications to obtain limited access to a service available over HTTP, subject to prior authorization by the resource owner. Access is requested by what is called a "client", which can be a website or a mobile application, for example. If the resources do not belong to the client, the client must obtain the end user's authorization; otherwise it can obtain access directly by authenticating with its own credentials.

Version 2 is meant to simplify the previous version of the protocol and to make interoperability between applications easier.

The specification is still being drafted and the protocol keeps evolving, but that has not stopped it from being widely adopted and implemented by many sites such as Google and Facebook.

Basic concepts

Roles

OAuth2 defines four distinct roles:
  • The resource owner (Resource Owner): usually yourself.
  • The resource server (Resource Server): the server hosting the protected data (for example Google, which stores your profile information).
  • The client (Client Application): an application requesting data from the resource server (it can be your server-side PHP application, a client-side JavaScript application, or a mobile application, for example).
  • The authorization server (Authorization Server): the server that issues tokens to the client. These tokens are used in the client's requests to the resource server. This server can be the same as the resource server (physically and at the application level), and it often is.
Read the full article (in French): http://www.bubblecode.net/fr/2016/01/22/comprendre-oauth2/
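
To make these roles concrete, here is a minimal sketch in Java of the flow described above: the client exchanges an authorization code for an access token at the authorization server, then presents that token to the resource server. Every URL, credential, and parameter value below is a placeholder invented for the example; none of them come from the article.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class OAuth2ClientSketch {
    public static void main(String[] args) throws Exception {
        // Step 1: the client exchanges an authorization code (obtained after the
        // resource owner's consent) for an access token at the authorization server.
        // All endpoints and credentials are placeholders.
        URL tokenEndpoint = new URL("https://authorization-server.example.com/token");
        String form = "grant_type=authorization_code"
                + "&code=AUTHORIZATION_CODE"
                + "&redirect_uri=https://client.example.com/callback"
                + "&client_id=CLIENT_ID"
                + "&client_secret=CLIENT_SECRET";

        HttpURLConnection tokenRequest = (HttpURLConnection) tokenEndpoint.openConnection();
        tokenRequest.setRequestMethod("POST");
        tokenRequest.setDoOutput(true);
        tokenRequest.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream out = tokenRequest.getOutputStream()) {
            out.write(form.getBytes(StandardCharsets.UTF_8));
        }

        // The authorization server answers with a JSON document containing the token.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(tokenRequest.getInputStream(), StandardCharsets.UTF_8))) {
            System.out.println("Token response: " + in.readLine());
        }

        // Step 2: the client calls the resource server, sending the token
        // in the Authorization header of the request.
        String accessToken = "ACCESS_TOKEN_FROM_RESPONSE";
        HttpURLConnection apiRequest = (HttpURLConnection)
                new URL("https://resource-server.example.com/profile").openConnection();
        apiRequest.setRequestProperty("Authorization", "Bearer " + accessToken);
        System.out.println("Resource server answered HTTP " + apiRequest.getResponseCode());
    }
}

A real client would also parse the JSON token response with a JSON library and handle error responses; both are omitted here to keep the sketch short.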

Wednesday, September 28, 2016

Next Generation of Statistical Tools to Be Developed for the Big Data Age

Lancaster University (09/21/16) 

Researchers at Lancaster University's Data Science Institute and the University of Cambridge's Statistical Laboratory in the U.K. are leading a program called StatScale, which is developing a new generation of statistical tools for the purpose of extracting insights from big data. "The ubiquity of sensors in everyday systems and devices...means there is enormous potential for societal and economic benefit if information can be extracted effectively," says Lancaster professor Idris Eckley. "The volume, scale, and structure of this contemporary data poses fundamentally new and exciting statistical challenges that cannot be tackled with traditional methods. Our aim is to develop a paradigm-shift in statistics, providing a new statistical toolbox to tackle, and capitalize on, these huge data streams." Cambridge professor Richard Samworth says the StatScale project will devise the underlying theoretical and methodological bases for next-generation scalable statistical algorithms. Engineering and Physical Sciences Research Council CEO Tom Rodden says the tools stemming from the project are needed to reliably interpret big data to yield economic and societal benefits. The techniques and models that emerge from StatScale will be piloted by industrial partners such as Shell U.K. and the Office for National Statistics so they can be rapidly tested and polished in real-world scenarios.

Monday, September 5, 2016

The JavaMail API

The JavaMail API provides a platform-independent and protocol-independent framework to build mail and messaging applications. The JavaMail API provides a set of abstract classes defining objects that comprise a mail system. It is an optional package (standard extension) for reading, composing, and sending electronic messages.

Visit: www.tutorialspoint.com/javamail_api
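
As a quick illustration of the API, here is a minimal sketch that composes and sends a plain-text message. The SMTP host and the addresses are placeholders, and SMTP authentication is omitted; the JavaMail classes and calls themselves are standard.

import java.util.Properties;
import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class SendMailSketch {
    public static void main(String[] args) throws MessagingException {
        // SMTP server settings (placeholders for your own mail server).
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.example.com");
        props.put("mail.smtp.port", "25");

        Session session = Session.getInstance(props);

        // Compose the message.
        Message message = new MimeMessage(session);
        message.setFrom(new InternetAddress("sender@example.com"));
        message.setRecipient(Message.RecipientType.TO, new InternetAddress("recipient@example.com"));
        message.setSubject("Hello from JavaMail");
        message.setText("This is a plain-text message sent with the JavaMail API.");

        // Hand the message to the transport layer for delivery.
        Transport.send(message);
    }
}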

Tuesday, June 21, 2016

Parallel Programming Made Easy

MIT News (06/20/16) Larry Hardesty 

Massachusetts Institute of Technology (MIT) researchers have developed Swarm, a chip design that should make parallel programs more efficient and easier to write. The researchers used simulations to compare Swarm versions of six common algorithms with the best existing parallel versions, which had been individually engineered by software developers. The Swarm versions were between three and 18 times as fast, but they generally required only one-tenth as much code. The researchers focused on a specific set of applications that have resisted parallelization for many years, and many of the apps involve the study of graphs, which are comprised of nodes and edges. Frequently, the edges have associated numbers called "weights," which often represent the strength of correlations between data points in a dataset. Swarm is equipped with extra circuitry specifically designed to handle the prioritization of the weights, and it time-stamps tasks according to their priorities and begins working on the highest-priority tasks in parallel. Higher-priority tasks may engender their own lower-priority tasks, but Swarm automatically slots those into its queue of tasks. Swarm also has a circuit that records the memory addresses of all the data its cores currently are working on; the circuit implements a Bloom filter, which stores data into a fixed allotment of space and answers yes or no questions about its contents.
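
The Bloom filter mentioned at the end is a general-purpose probabilistic data structure, and the small Java sketch below shows the idea in software: using a fixed amount of space, it can answer "definitely not seen" or "possibly seen" for a memory address. The sizes and hash mixing here are illustrative only and are not taken from the Swarm design, which implements the filter in hardware.

import java.util.BitSet;

public class BloomFilterSketch {
    private final BitSet bits;
    private final int size;
    private final int hashCount;

    public BloomFilterSketch(int size, int hashCount) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashCount = hashCount;
    }

    // Derive several bit positions from one value by mixing in the hash index.
    private int position(long value, int i) {
        long h = value * 0x9E3779B97F4A7C15L + i * 0xC2B2AE3D27D4EB4FL;
        h ^= (h >>> 31);
        return (int) ((h & Long.MAX_VALUE) % size);
    }

    // Record an address in the filter by setting one bit per hash function.
    public void add(long memoryAddress) {
        for (int i = 0; i < hashCount; i++) {
            bits.set(position(memoryAddress, i));
        }
    }

    // false means the address was definitely never added; true means it may have been.
    public boolean mightContain(long memoryAddress) {
        for (int i = 0; i < hashCount; i++) {
            if (!bits.get(position(memoryAddress, i))) {
                return false;
            }
        }
        return true;
    }
}

Because several addresses can map to the same bits, such a filter can return false positives but never false negatives, which is why it only answers yes-or-no questions about its contents.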

Monday, June 6, 2016

Finding Relevant Data in a Sea of Languages

MIT News (05/27/16) Ariana Tantillo; Dorothy Ryan 

Researchers in the Massachusetts Institute of Technology Lincoln Laboratory's Human Language Technology (HLT) Group seek to address the challenge of providing multilingual content analysis amid a shortage of analysts with the necessary skills. Their work could potentially benefit law enforcement and the U.S. Department of Defense and intelligence communities. The HLT team is exploiting innovations in language recognition, speaker recognition, speech recognition, machine translation, and information retrieval to automate language-processing tasks so the available linguists who analyze text and spoken foreign languages are more efficiently utilized. The team is concentrating on cross-language information retrieval (CLIR) using the Cross-LAnguage Search Engine (CLASE), which enables English monolingual analysts to help look for and filter foreign language documents. The researchers use probabilistic CLIR based on machine-translation lattices. The method entails documents being machine-translated into English as a lattice containing all possible translations with their respective probabilities of accuracy. Documents containing the most likely translations are extracted from the collection for analysis, based on an analyst's query of a document collection; CLIR results are assessed according to precision, recall, and their harmonic average or F-measure. Meanwhile, HLT's Jennifer Williams is developing algorithms to identify languages in text data so CLASE can select the appropriate machine translation models, and others are working on automatic multilingual text-translation systems.
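
For reference, the evaluation measures named above have simple definitions: precision is the fraction of retrieved documents that are relevant, recall is the fraction of relevant documents that were retrieved, and the F-measure is their harmonic mean. The small Java helper below computes them; the document counts in the example are hypothetical.

public class RetrievalMetrics {
    // relevantRetrieved: relevant documents that the system returned
    // retrieved: all documents the system returned
    // relevant: all relevant documents in the collection
    public static double fMeasure(int relevantRetrieved, int retrieved, int relevant) {
        double precision = (double) relevantRetrieved / retrieved;
        double recall = (double) relevantRetrieved / relevant;
        return 2 * precision * recall / (precision + recall); // harmonic mean
    }

    public static void main(String[] args) {
        // Example: 30 of the 50 returned documents are relevant, out of 60 relevant overall.
        System.out.println(fMeasure(30, 50, 60)); // prints about 0.545
    }
}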

Wednesday, May 4, 2016

Big Data: a definition


Big data is an evolving term that describes any voluminous amount of structured, semi-structured and unstructured data that has the potential to be mined for information.
Big data has also been defined by the four Vs: https://www.oracle.com/big-data/index.html

Volume.

The amount of data. While volume indicates more data, it is the granular nature of the data that is unique. Big data requires processing high volumes of low-density, unstructured Hadoop data—that is, data of unknown value, such as Twitter data feeds, click streams on a web page and a mobile app, network traffic, sensor-enabled equipment capturing data at the speed of light, and many more. It is the task of big data to convert such Hadoop data into valuable information. For some organizations, this might be tens of terabytes, for others it may be hundreds of petabytes.

Velocity.

The fast rate at which data is received and perhaps acted upon. The highest velocity data normally streams directly into memory versus being written to disk. Some Internet of Things (IoT) applications have health and safety ramifications that require real-time evaluation and action. Other internet-enabled smart products operate in real time or near real time. For example, consumer eCommerce applications seek to combine mobile device location and personal preferences to make time-sensitive marketing offers. Operationally, mobile application experiences have large user populations, increased network traffic, and the expectation for immediate response.

Variety.

New unstructured data types. Unstructured and semi-structured data types, such as text, audio, and video, require additional processing to derive both meaning and the supporting metadata. Once understood, unstructured data has many of the same requirements as structured data, such as summarization, lineage, auditability, and privacy. Further complexity arises when data from a known source changes without notice. Frequent or real-time schema changes are an enormous burden for both transactional and analytical environments.

Value.

Data has intrinsic value, but it must be discovered. A range of quantitative and investigative techniques can derive value from data, from discovering a consumer preference or sentiment, to making a relevant offer by location, to identifying a piece of equipment that is about to fail. The technological breakthrough is that the cost of data storage and compute has decreased dramatically, providing an abundance of data on which statistical analysis can be run over the entire data set rather than, as before, over only a sample. This makes much more accurate and precise decisions possible. However, finding value also requires new discovery processes involving clever and insightful analysts, business users, and executives. The real big data challenge is a human one: learning to ask the right questions, recognizing patterns, making informed assumptions, and predicting behavior.

Sunday, April 10, 2016

These Are the Cities Where Tech Workers Live Largest

USA Today (04/07/16) John Shinal

Annual data released by the U.S. Bureau of Labor Statistics demonstrates the value of an education in the science, technology, engineering, or math fields. Workers employed in computer and math occupations in the cities with the most technology employees earned yearly salaries about 50 percent to 75 percent higher than the overall workforce. Seattle tech workers, for example, had a mean salary of $108,350, or 78 percent more than the $61,000 earned by all workers there. That was the highest tech-worker premium in the 10 largest hubs, followed by Dallas-Fort Worth, Houston, and Austin. The same is true in the burgeoning tech hub of Oakland, CA, where workers in computer and math occupations were paid 70 percent more. Computer and math occupations in Los Angeles, Philadelphia, San Jose, and San Francisco all earn more than 60 percent more than their non-tech counterparts. Although Washington, D.C., is among the largest tech-employing regions, its tech workers had the smallest salary differential, at 54 percent, likely due to the large numbers of federal government workers. Among tech occupations, software developers and systems analysts were the highest in number in nearly all of the largest tech hubs, surpassing computer programmers, network and database administrators, computer research scientists, and computer-support specialists.

Wednesday, April 6, 2016

The Java™ Tutorials: Aggregate Operations

The Java Tutorials are practical guides for programmers who want to use the Java programming language to create applications. They include hundreds of complete, working examples, and dozens of lessons. Groups of related lessons are organized into "trails". (The full tutorials https://docs.oracle.com/javase/tutorial/)

This week, let us focus on:

Aggregate Operations

Prerequisite: Lambda Expressions and Method References.

Then follow the Aggregate Operations tutorial.
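
As a small preview of what the tutorial covers, the sketch below builds stream pipelines that use a lambda expression, a method reference, and two aggregate operations (a count and a mutable reduction with collect). The list of names is invented for the example.

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class AggregateOperationsDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Ada", "Linus", "Grace", "Alan", "Barbara");

        // Aggregate operation with a lambda expression: count the names starting with "A".
        long count = names.stream()
                .filter(name -> name.startsWith("A"))
                .count();

        // Aggregate operation with a method reference: upper-case and join the names.
        String joined = names.stream()
                .map(String::toUpperCase)
                .collect(Collectors.joining(", "));

        System.out.println(count);   // 2
        System.out.println(joined);  // ADA, LINUS, GRACE, ALAN, BARBARA
    }
}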

Thursday, March 31, 2016

Lebanese Java Users Group, new updates

Updates from the Lebanese Java Users Group...

First-time git config

The first thing you should do when you install Git is to set your user name and email address. This is important because every Git commit uses this information, and it’s immutably baked into the commits you start creating:
$ git config --global user.name "Pascal E Fares"
$ git config --global user.email forgit@cofares.net
You need to do this only once if you pass the --global option, because then Git will always use that information for anything you do on that system. If you want to override this with a different name or email address for specific projects, you can run the command without the --global option when you’re in that project.

Wednesday, March 30, 2016

Testing to Start for Computer With Chips Inspired by the Human Brain

The Wall Street Journal (03/28/16) Robert McMillan 

The Lawrence Livermore National Laboratory (LLNL) on Thursday will begin testing a $1-million computer packed with 16 IBM TrueNorth microprocessors designed to mimic the functions of the human brain. Bundled into each TrueNorth chip are 5.4 billion transistors comprising a network of 1 million simulated neurons connected by a massive web of synapses. "TrueNorth is useful for deep-learning applications and for a broader class of machine-learning applications as well," says LLNL researcher Brian Van Essen. TrueNorth emulates the brain's low power consumption, with the 16 chips using only 2.5 watts together versus a typical server chip's power requirements of up to 150 watts. Van Essen's team will test TrueNorth by uploading some supercomputing tasks to it. Van Essen expects the system to help the lab filter out potential glitches in simulations of phenomena such as subatomic particle interactions and identify patterns in cybersecurity and video surveillance. "It's great that they're [testing TrueNorth]," says University of Washington professor Luis Ceze. "It's very efficient, but they have to show that the accuracy of the models that they implement [is] good enough."

Saturday, March 12, 2016

How to Secure MySQL Replication Using SSH...

Configuring MySQL replication normally leaves port 3306 open to the Internet, and the data exchanged between the replication servers is not encrypted. With SSH tunneling, the replication traffic can instead be carried over an encrypted SSH connection...

Monday, March 7, 2016

Hadoop Tutorial: Intro To Hadoop Developer Training

Object-Oriented Meets Functional (SCALA)

Have the best of both worlds. Construct elegant class hierarchies for maximum code reuse and extensibility, implement their behavior using higher-order functions. Or anything in-between.

Tuesday, February 2, 2016

Learn Git and GitHub without any code!


Get Started with GitHub (EN)

We are preparing a video version in French; as soon as it is finished, the presentation will be published on this site.