Thaw could release Cold War-era U.S. toxic waste buried under Greenland's ice

OSLO Global warming could release radioactive waste stored in an abandoned Cold War-era U.S. military camp deep under Greenland's ice if a thaw continues to spread in coming decades, scientists said on Friday.

Camp Century was built in northwest Greenland in 1959 as part of U.S. research into the feasibility of nuclear missile launch sites in the Arctic, the University of Zurich said in a statement.

Staff left gallons of fuel and an unknown amount of low-level radioactive coolant there when the base shut down in 1967, on the assumption it would be entombed forever, according to the university.

It is all currently about 35 meters (114.83 ft) down. But the part of the ice sheet covering the camp could start to melt by the end of the century on current trends, the scientists added.

"Climate change could remobilize the abandoned hazardous waste believed to be buried forever beneath the Greenland ice sheet," the university said of findings published this week in the journal Geophysical Research Letters.

The study, led by York University in Canada in collaboration with the University of Zurich, estimated that pollutants in the camp included 200,000 liters (44,000 UK gallons) of diesel fuel and the coolant from a nuclear generator used to produce power.

"It's a new breed of political challenge we have to think about," lead author William Colgan, a climate and glacier scientist at York University, said in a statement.

"If the ice melts, the camp's infrastructure, including any remaining biological, chemical, and radioactive wastes, could re-enter the environment and potentially disrupt nearby ecosystems," the University of Zurich said.

The study said it would be extremely costly to try to remove any waste now. It recommended waiting "until the ice sheet has melted down to almost expose the wastes before beginning site remediation."

There was no immediate comment from U.S. authorities.

(Reporting by Alister Doyle; Editing by Andrew Heavens)

China regulator says Didi, Uber deal will need Mofcom approval

BEIJING A merger between Chinese ride-hailing firm Didi Chuxing and the China unit of U.S. rival Uber could face its first hiccup after China's commerce ministry (Mofcom) said on Tuesday it had not received a necessary application to allow the deal to go ahead.

Didi's acquisition of Uber's China operations, announced on Monday, will create a roughly $35 billion ride-hailing giant and could raise monopoly concerns, as Didi claims an 87 percent market share in China. Uber China is the second-largest player.

Mofcom, one of China's anti-trust regulators, said at a news briefing that the two firms need to seek approval for the deal to go ahead. It had been unclear previously whether such a filing would be required, as both firms are loss-making in China.

"Mofcom has not currently received a merger filing related to the deal between Didi and Uber," ministry spokesman Shen Danyang said. "All transactors must apply to the ministry in advance. Those that haven't applied won't be able to carry out a merger" if they fall under applicable anti-trust and merger rules, he said.

Didi Chuxing did not immediately respond to a request for comment. Uber did not respond to requests for comment.

Didi and Uber have been in a fierce battle in China, spending billions of dollars to subsidize rides and win users. Other players, however, could step up competition. Jia Yueting, head of LeEco, the parent of smaller ride-hailing rival Yidao, said in a social media post that the firm would offer steep rebates to attract passengers and help avoid a monopoly in the market.

"Yidao will soon kick off an even more aggressive cashback campaign," according to a translation of Jia's posting provided by a LeEco spokeswoman.

Regulations released last week that take effect on Nov. 1 legitimize ride-hailing but prohibit services from offering rides below cost.

(Reporting by Jake Spring, Paul Carsten and Li Zimu, Norihiko Shirouzu and Beijing monitoring team; Editing by Ian Geoghegan)

Solar plane circles globe in first for clean energy

ABU DHABI A solar-powered aircraft successfully completed the first fuel-free flight around the world on Tuesday, returning to Abu Dhabi after an epic 16-month voyage that demonstrated the potential of renewable energy.

The plane, Solar Impulse 2, touched down in the United Arab Emirates capital at 0005 GMT (0405 local time) on Tuesday. It first took off from Abu Dhabi on March 9, 2015, beginning a journey of about 40,000 km (24,500 miles) and nearly 500 hours of flying time.

Bertrand Piccard and Andre Borschberg, the Swiss founders of the project, took turns piloting the aircraft, which has a wingspan larger than a Boeing 747's but weighs no more than an average family car.

"More than an achievement in the history of aviation, Solar Impulse has made history in energy," Piccard, who piloted the plane on the last leg, told a large crowd on landing. "I'm sure that within the next 10 years we'll see electric airplanes carrying 50 passengers on short- to medium-haul flights," he said in a statement.

He said the technologies used on Solar Impulse 2 could be used on the ground in daily life to halve emissions of carbon dioxide, the main greenhouse gas blamed for climate change.

The propeller-driven aircraft's four engines are powered by energy collected from more than 17,000 solar cells built into the wings. Excess energy is stored in batteries. Unfavorable weather at times hindered smooth flying, grounding the plane for months in some countries. In all, the plane made 16 stopovers.

The pilots also had to demonstrate the mental stamina required to tackle vast distances alone at a cruising speed of no more than 90 km per hour (56 mph) and altitudes of up to 9,000 meters (29,500 feet).

"We were facing the oceans... We had to build up this mindset, not just the plane and technology," Piccard told reporters.

For the two pilots, landing back where they started is only "the beginning of the continuation" of a longer journey, said Piccard, who in 1999 became the first person to circumnavigate the globe non-stop in a hot air balloon. Aside from continuing to promote renewable energy, they plan to launch an international council to advise governments and develop new applications for clean energy technology.

(Reporting by Stanley Carvalho; editing by Sami Aboudi and John Stonestreet)

Evolution of Linux Containers and Their Future

Linux containers are an operating-system-level virtualization technology for providing multiple isolated Linux environments on a single Linux host. Unlike virtual machines (VMs), containers do not run dedicated guest operating systems. Rather, they share the host operating system kernel and use the guest operating system's system libraries to provide the required OS capabilities. Since there is no dedicated operating system, containers start much faster than VMs.

[Image credit: Docker Inc.]

Containers make use of Linux kernel features such as namespaces, AppArmor, SELinux profiles, chroot, and cgroups to provide an isolated environment similar to VMs. Linux security modules guarantee that access from the containers to the host machine and the kernel is properly managed to prevent intrusion. In addition, a container can run a different Linux distribution from its host operating system, as long as both operating systems can run on the same CPU architecture.

In general, container platforms provide a means of creating container images based on various Linux distributions, an API for managing the lifecycle of containers, client tools for interacting with the API, features for taking snapshots, migrating container instances from one container host to another, etc.

Container History

Below is a short summary of container history, extracted from Wikipedia and other sources:

1979 — chroot

The concept of containers started way back in 1979 with UNIX chroot, a UNIX system call that changes the root directory of a process and its children to a new location in the filesystem, visible only to the given process. The idea of this feature is to provide an isolated disk space for each process. In 1982 it was added to BSD.

2000 — FreeBSD Jails

FreeBSD Jails is one of the early container technologies, introduced by Derrick T. Woolworth at R&D Associates for FreeBSD in 2000. It is similar to chroot, but includes additional process sandboxing features for isolating the filesystem, users, networking, etc. As a result, it can assign an IP address to each jail and allow custom software installations and configurations, etc.

2001 — Linux VServer

Linux VServer is another jail mechanism that can be used to securely partition resources on a computer system (file system, CPU time, network addresses, and memory). Each partition is called a security context, and the virtualized system within it is called a virtual private server.

2004 — Solaris Containers

Solaris Containers were introduced for x86 and SPARC systems, first released publicly in February 2004 in build 51 beta of Solaris 10, and subsequently in the first full release of Solaris 10 in 2005. A Solaris Container is a combination of system resource controls and the boundary separation provided by zones. Zones act as completely isolated virtual servers within a single operating system instance.

2005 — OpenVZ

OpenVZ is similar to Solaris Containers and makes use of a patched Linux kernel to provide virtualization, isolation, resource management, and checkpointing. Each OpenVZ container has an isolated file system, users and user groups, a process tree, network, devices, and IPC objects.

2006 — Process Containers

Process Containers was implemented at Google in 2006 for limiting, accounting for, and isolating the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes. It was later renamed Control Groups, to avoid confusion with the multiple meanings of the term "container" in the Linux kernel context, and was merged into Linux kernel 2.6.24. This shows how early Google was involved in container technology and how it has contributed back.
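To give a feel for the kernel interface these entries describe, here is a rough sketch in Scala of driving cgroups through the filesystem. It assumes the cgroup v1 memory controller mounted at /sys/fs/cgroup/memory (the common layout in this era; paths vary by distribution), a hypothetical group name "demo", and root privileges:

```scala
import java.lang.management.ManagementFactory
import java.nio.file.{Files, Paths}

// Sketch: cap this JVM's memory via the cgroup v1 filesystem interface.
object CgroupSketch {
  def main(args: Array[String]): Unit = {
    // Creating a directory inside the cgroup filesystem creates a new
    // group; the kernel populates it with control files automatically.
    val group = Paths.get("/sys/fs/cgroup/memory/demo") // hypothetical group
    Files.createDirectories(group)

    // Limit the group to 256 MiB of memory.
    Files.write(group.resolve("memory.limit_in_bytes"), "268435456".getBytes)

    // Move the current process into the group; from now on its memory
    // usage is accounted against, and capped by, the limit above.
    // (HotSpot reports the runtime name as "pid@hostname".)
    val pid = ManagementFactory.getRuntimeMXBean.getName.split("@")(0)
    Files.write(group.resolve("tasks"), pid.getBytes)
  }
}
```

Container runtimes such as LXC and Docker do essentially this bookkeeping, on the process's behalf, for every controller (CPU, memory, block I/O, and so on).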
2007 — Control Groups

As explained above, Control Groups, a.k.a. cgroups, was implemented by Google and added to the Linux kernel in 2007.

2008 — LXC

LXC stands for LinuX Containers, and it was the first complete implementation of a Linux container manager. It was implemented using cgroups and Linux namespaces. LXC is delivered in the liblxc library and provides language bindings for the API in Python 3, Python 2, Lua, Go, Ruby, and Haskell. In contrast to other container technologies, LXC works on the vanilla Linux kernel without requiring any patches. Today the LXC project is sponsored by Canonical Ltd.

2011 — Warden

Warden was implemented by CloudFoundry in 2011, initially using LXC and later replacing it with their own implementation. Unlike LXC, Warden is not tightly coupled to Linux. Rather, it can work on any operating system that provides ways of isolating environments. It runs as a daemon and provides an API for managing the containers. Refer to the Warden documentation for more detailed information.

2013 — LMCTFY

lmctfy stands for "Let Me Contain That For You". It is the open source version of Google's container stack, which provides Linux application containers. Google started this project with the intention of providing guaranteed performance, high resource utilization, shared resources, over-commitment, and near-zero overhead with containers (ref: lmctfy presentation). The cAdvisor tool used by Kubernetes today grew out of the lmctfy project. The initial release of lmctfy was in October 2013, and in 2015 Google decided to contribute the core lmctfy concepts and abstractions to libcontainer. As a result, no active development happens in lmctfy now. The libcontainer project was initially started by Docker and has since been moved to the Open Container Initiative.

2013 — Docker

Docker is the most popular and widely used container management system as of January 2016. It was developed as an internal project at a platform-as-a-service company called dotCloud and was later renamed Docker. Similar to Warden, Docker also used LXC in its initial stages and later replaced LXC with its own library, libcontainer. Unlike any other container platform, Docker introduced an entire ecosystem for managing containers. This includes a highly efficient, layered container image model, global and local container registries, a clean REST API, a CLI, etc. At a later stage, Docker also took the initiative to implement a container cluster management solution called Docker Swarm.

2014 — Rocket

Rocket is an initiative much like Docker, started by CoreOS to fix some of the drawbacks they found in Docker. CoreOS has said that its aim is to meet more rigorous security and production requirements than Docker. More importantly, it is implemented around the App Container specification, making it a more open standard. In addition to Rocket, CoreOS also develops several other container-related products used by Docker and Kubernetes: the CoreOS operating system, etcd, and flannel.

2016 — Windows Containers

Microsoft also took the initiative to add container support to the Microsoft Windows Server operating system in 2015 for Windows-based applications, called Windows Containers.
Windows Containers are to be released with Microsoft Windows Server 2016. With this implementation, Docker will be able to run containers on Windows natively, without having to run a virtual machine (earlier, Docker ran on Windows inside a Linux VM).

The Future of Containers

As of today (January 2016) there is a significant trend in the industry to move from VMs to containers for deploying software applications. The main reasons for this are the flexibility and low cost that containers provide compared to VMs. Google has used container technology for many years, with the Borg and Omega container cluster management platforms running Google applications at scale. More importantly, Google has contributed to the container space by implementing cgroups and participating in the libcontainer project. It has presumably gained hugely in performance, resource utilization, and overall efficiency from containers over the past years. Very recently Microsoft, which previously had no operating-system-level virtualization on the Windows platform, moved quickly to implement native support for containers on Windows Server.

Docker, Rocket, and other container platforms should not run on a single host in a production environment, because a single host is a single point of failure: if all the containers run on one host and that host fails, they all fail with it. To avoid this, a container host cluster needs to be used. Google took a step in this direction by implementing an open source container cluster management system called Kubernetes, drawing on its experience with Borg. Docker also started a solution called Docker Swarm. Today these solutions are at a very early stage, and it may take several months, maybe another year, for them to complete their full feature set, become stable, and be widely used in production environments in the industry.

Microservices are another groundbreaking technology, or rather a software architecture, that uses containers for deployment. A microservice is nothing new; it is a lightweight implementation of a web service that can start extremely fast compared to a standard web service. This is done by packaging a unit of functionality (perhaps a single service/API method) in one service and embedding it in a lightweight web server binary (a minimal sketch of the idea follows at the end of this section).

Considering the above facts, we can predict that in the next few years containers may overtake virtual machines, and in some cases replace them completely. Last year I worked with a handful of enterprises on implementing container-based solutions at a proof-of-concept level. There were a few who wanted to take on the challenge and put them in production. This may change very quickly as the container cluster management systems mature.
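As referenced above, here is a minimal sketch of such a microservice in Scala, using only the JDK's built-in com.sun.net.httpserver package; the service name, endpoint, port, and payload are all illustrative, not a prescribed design:

```scala
import com.sun.net.httpserver.{HttpExchange, HttpHandler, HttpServer}
import java.net.InetSocketAddress

// One unit of functionality (a single API method) embedded in a
// lightweight in-process web server: the essence of a microservice.
object PriceService {
  def main(args: Array[String]): Unit = {
    val server = HttpServer.create(new InetSocketAddress(8080), 0)
    server.createContext("/price", new HttpHandler {
      override def handle(exchange: HttpExchange): Unit = {
        val body = """{"item":"demo","price":42.0}""".getBytes("UTF-8")
        exchange.getResponseHeaders.add("Content-Type", "application/json")
        exchange.sendResponseHeaders(200, body.length.toLong)
        exchange.getResponseBody.write(body)
        exchange.close()
      }
    })
    server.start() // up in milliseconds; no application server required
    println("PriceService listening on :8080")
  }
}
```

Packaged into a container image, a process like this starts in milliseconds, which is what makes containers such a natural deployment unit for microservices.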

Gatling Tool Review for Performance Tests (Written in Scala)

Have you heard of Gatling for performance tests? It seems to be a relatively new tool (created in 2012, so pretty new) that has recently been gaining popularity (250,000 downloads in four years, 60,000 of those in the last three months, meaning it has been getting attention from the community). So that you don't have to dedicate too much time out of your day to learn more about this tool, I wrote this review to sum up some of the tests I ran with it. Hopefully, within just a few minutes, this Gatling tool review will give you a good idea of what you can do with it. As there are hardly any articles about the topic in Spanish, this is a translation of my original post (written in español!).

Key features of Gatling:

●   Tool for performance testing
●   Free and open source (developed in Java/Scala)
●   The scripting language is Scala, with its own DSL
●   It works on any operating system and with any browser
●   It supports the HTTP/S, JMS, and JDBC protocols
●   Colorful reports in HTML
●   It doesn't allow you to distribute the load between machines, but it can execute its tests in different test clouds. It can scale using Taurus with BlazeMeter (Taurus provides many facilities for continuous integration)

It's a great tool for when:

●   You need to simulate fewer than 600 concurrent users. This is just a reference number that depends on how much processing your simulation script does; if you need to generate more, you will have to pay for a tool in the cloud. A colleague told me that he managed to execute a simple script with 4,000 concurrent users from just one machine.
●   You want to learn about performance tests (it's very simple, and the code is very legible)
●   You are interested in maintaining the test code (the language, Scala, and Gatling's DSL are strongly focused on facilitating the maintainability of the tests, which is ideal if you are focusing on continuous integration)

This tool allows you to carry out a load simulation of concurrent users against a system through the HTTP/S, JMS, or JDBC protocols. The most typical scenario for this tool is simulating the users of a web system in order to find the bottlenecks and optimize it. For comparison, some very popular alternatives on the market are JMeter and HP LoadRunner (to name one open source tool and one commercial one; both are widely used).

Gatling is a free and open source tool. It runs on Java, and is thus suitable for all operating systems. It requires JDK 8 (the runtime alone is not enough; we need the development kit).

The tool has two executables: one to record the tests and another to execute them. The tests are recorded in Scala, which is a very clean and easy to read language, even when looking at it for the first time.
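Since legibility is one of the claims here, a minimal hand-written simulation is worth showing. This is a sketch using the Gatling 2.x DSL; the target URL, user count, and ramp duration are illustrative only:

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

// Minimal load test: 50 virtual users ramped over 30 seconds,
// each hitting the home page and checking for an HTTP 200.
class BasicSimulation extends Simulation {
  val httpConf = http.baseURL("http://example.com") // illustrative target

  val scn = scenario("Browse home page")
    .exec(http("home").get("/").check(status.is(200)))
    .pause(2) // think time between requests, in seconds

  setUp(scn.inject(rampUsers(50) over (30 seconds))).protocols(httpConf)
}
```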
After each execution, you get a colorful and detailed report.

Fundamental Aspects for the Correct Simulation of Users

The scripts must cover several aspects that are fundamental to correctly simulating users, which in our view are:

●   Handling of the protocol (from the invocations and responses to the management of headers, cookies, etc.)
●   Handling of strings: facilities for parsing, regular expressions, and locating elements by XPath, JSONPath, CSS, and more
●   Validations, since we need to check that the responses are correct
●   Parametrization from different sources of data (here I see a very strong point of this tool, since it offers various easy-to-use alternatives; a sketch at the end of this post ties this together with correlation and checks)
●   Handling of dynamic variables, known as variable correlation
●   Handling of the different scopes of variables (thread level, test level, etc.)
●   Modularization (facilitating the maintainability and legibility of the scripts)
●   Handling of waits (to simulate think times)
●   Metrics management (response times, individual and grouped; transactions per second; number of concurrent users; errors; amount of transferred data; etc.)
●   Management of errors and exceptions
●   Flow control (loops, if-then-else)

What other things do you consider when evaluating the scripting language of a load or stress simulation tool?

Gatling Reports

Regarding the reports, they are very colorful and complete. Here I'd like to highlight that the reports:

●   Are in HTML, with easy navigation, an index, and good organization
●   Show the information graphically, well grouped, well processed, and well cross-related
●   Include a graph of the number of virtual users during the test
●   Let you zoom in on the graphics to focus on and analyze certain areas in more detail
●   Graph the requests per second and the responses per second, compared against the number of active users
●   Let you see each request in detail, in order to refine your analysis
●   Separate the response times of the requests that were "OK" from the ones that failed
●   Handle the concept of percentiles
●   Include a log of the errors found

What other things do you deem important when evaluating the reports of a stress or load simulation tool?

In short, we at Abstracta are big fans of Gatling. We are starting to use it in projects, as we have received several requests from clients to use it. In the future, I am sure that it will continue to be an important item in our continuous integration toolshed.

Have you used Gatling? How does it measure up for you?
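As promised in the list above, here is a sketch of how parametrization, correlation, and validations look together in the Gatling 2.x DSL; the CSV file, endpoints, JSON path, and target URL are all hypothetical:

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._

// Parametrization, correlation, and checks in one scenario.
class LoginSimulation extends Simulation {
  // Parametrization: each virtual user takes a row from users.csv,
  // a hypothetical file with a "username,password" header line.
  val users = csv("users.csv").circular

  val scn = scenario("Login and reuse a dynamic value")
    .feed(users)
    .exec(
      http("login")
        .post("/login")
        .formParam("user", "${username}")
        .formParam("pass", "${password}")
        // Correlation: capture a server-generated token for later use.
        .check(jsonPath("$.token").saveAs("authToken"))
    )
    .exec(
      http("profile")
        .get("/profile")
        .header("Authorization", "Bearer ${authToken}")
        .check(status.is(200)) // validation: the response must be correct
    )

  setUp(scn.inject(atOnceUsers(10)))
    .protocols(http.baseURL("http://example.com")) // illustrative target
}
```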
