Saturday, June 29, 2013

The danger of license plate scanning systems and other unauthorized databases

License plate scanning systems make it possible to build databases that can reconstruct the path of a vehicle within a geographical area, provided a certain minimum number of surveillance cameras are connected to the capture/scan/ANPR system.

In this way, extremely valuable metadata can be obtained, and the technological infrastructure needed to obtain it is relatively cheap and accessible in Argentina.

The problem is that these databases fall into a regulatory gray area: they store information that is publicly available (the license plate is visible to anyone on the street), yet they accumulate extremely sensitive metadata, because the State has both the information and the ability to associate the plate data (owner, model, etc.) with the usual routes traveled by the vehicle carrying that plate.

In Argentina the gray area arises because this type of "derived" database is not fully governed by the law; at most it could be legally challenged at some point, if the traffic/travel data ends up being used as evidence in a trial.

For that very reason, that scenario is avoided by all means: the travel metadata are mainly used for investigation and for obtaining other, less controversial evidence of crimes, evidence with less potential to be challenged legally or to trigger a public controversy that could end up regulating a gray area that today remains unregulated.

Examples of possible problems:

Most of these "internal" databases in the security forces are accessed by operators who are not specialists, and many rigid, strict access-hardening procedures must be followed to ensure that access to the plate-tracking metadata remains available only to those specifically authorized for it - and eventually auditable - and is not left within reach of interested third-party eyes, or worse, third parties nobody knows about and nobody is ever going to audit.
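As a rough illustration of the kind of auditing that should exist around these queries, here is a minimal sketch; the log format, file name and operator whitelist are hypothetical, not taken from any real system:

# Minimal sketch: flag plate-metadata queries made by operators who are not
# on the authorized list. Log format, file name and whitelist are hypothetical.
import csv

AUTHORIZED_OPERATORS = {"op_garcia", "op_lopez"}   # hypothetical whitelist

def audit_access_log(path="anpr_access.csv"):
    """Each row: timestamp, operator, plate_queried."""
    alerts = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["operator"] not in AUTHORIZED_OPERATORS:
                alerts.append((row["timestamp"], row["operator"], row["plate_queried"]))
    return alerts

if __name__ == "__main__":
    for ts, who, plate in audit_access_log():
        print(f"{ts}: unauthorized query of plate {plate} by {who}")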

The problem is that today these databases sit inside highly connected systems, and a database that few or almost no one knows exists is particularly prone to unauthorized access, precisely because the many users of the networks and systems that could eventually connect to it have no real idea of how serious it is to get things wrong and/or to skip even minimal hardening rules.

Classic examples of how supposedly secure facilities and computer networks end up de-secured include:

- USB ports on the terminals that access the database, where anyone can plug in:

* USB flash drives: they may contain trojans built to work in disconnected networks (with an "air gap", i.e. no link to the Internet) that set up data "dropboxes" (which hold the stolen information until it is transferred to the drive the next time it is plugged in), or that directly install backdoors which try to connect to the Internet.

* USB devices with 3G or WiFi connectivity: with one of these, a supposedly secure network immediately becomes a network connected to the Internet, with all the potential problems that entails.
etc.

- Allowing smartphones into the facility: the hacking possibilities range from few to total in the case of a targeted attack that uses smartphones as a "bridge" to circumvent the "air gap" that keeps the database network safely disconnected.

- Having no constant, automatic video surveillance of operators: an extremely simple security measure (almost every medium-sized supermarket has it on its checkout lanes) whose absence opens the possibility of invalidating every other measure. For example, if an operator simply loses their credentials or has them stolen (ideally two-factor: smartcard plus password) and someone else - including another authorized operator - uses that access, there is no way to identify the person who impersonated the operator. Etc.

Conclusions

In short, the potential for misuse of databases holding sensitive metadata is enormous, and we have seen no criminal problems - yet - simply because very few people - mainly ordinary citizens and "civilians" unconnected to the security forces - are aware of the potential of these tools in the usual political, economic and socio-economic games.

As "players" sufficiently motivated and well funded require access - legally or illegally - to metadata, we may see the first cases - public - misuse of these databases.

If in Argentina it is almost a habit for complete case files - extremely private and with very restricted access - to show up photocopied outside the courts, it is to be expected that, given the superlative value of this metadata (compared with a "simple" file, for example), powerful interests will eventually move toward trying to access it illegally, which will be much easier if people remain totally unaware of the danger and the real value of these databases.

Tuesday, June 18, 2013

XBOX ONE in June
Today we will discuss the Microsoft Xbox One, the new-generation console that arrives surrounded by controversy.

The technical characteristics are known: an eight-core AMD CPU, an AMD Radeon graphics card, 8 GB of DDR3 RAM and 500 GB of internal storage. It also has a Blu-ray reader and is accompanied by Kinect 2, which comes with an upgrade to play in low light.

The design is completely new: squarer, with perfectly straight lines, and the machine has gone from being a game console to being a multimedia center for the home. According to some, the PS4 is focused on gamers while the Xbox One targets the whole family.

Among the connections are the power input, HDMI, USB 3.0 ports, optical audio, an Ethernet connection, the Kinect port and an infrared port whose purpose is not yet known.

So what is the controversy around this console? To start with, it will have to connect to the Internet once every 24 hours. Another negative is that games cannot be lent to more than one person at a time. It is also rumored that it will be region-locked, i.e. a game bought in Japan may not be usable in Europe.

It will allow the sale of second-hand games, but only through approved channels.

Alongside the console, games such as Dead Rising 3, Project Spark, Forza Motorsport 5 and Halo: Spartan Assault will be released.

This console will cost a good number of euros, and it is estimated that it will be released in November. In this case the Kinect is included.

Monday, May 20, 2013

Open source monitoring rapid testing: the 2013 capabilities update

It has been a busy week so far: I've been re-examining the status of different monitoring solutions based on open source software, and since Monday I have been deploying Nagios, Icinga, Ganglia, Cacti, OpenNMS and Zabbix, and I'm installing Sensu now.

Basically OpenNMS is what worked best out-of-the-box, taking only a couple of hours for the first complete installation + configuration (and that first one was also the test run); then, with a few clicks, a self-discovery sweep nicely detected the range of devices on my test network. Setting thresholds and notifications was a bit more work, another hour, reading some rather confusing documentation and mailing-list threads full of vague requests for help and equally vague answers. Sure, the solution turned out to be quite simple and intuitive... after having gone through it the first time. It basically works quite well but is a bit fragile, in the sense that deploying additional plugins quickly - via apt, for example - does not always leave OpenNMS completely stable: you can have the service running perfectly for hours after installing plugins, and then at the first system reboot something triggers a boot failure, and the error messages are basically the Java VM output dump, which rarely contains useful information for recovery (the "standard" answer on the forums/lists assumes that advanced OpenNMS users are very familiar with identifying which "parts" of the software to change/fix by reading the JVM dump directly, rather than looking that information up in some documentation).
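For instance, once discovery has run, the nodes OpenNMS found can also be pulled out through its ReST interface instead of clicking through the GUI. A quick sketch; the base URL, port and default admin credentials are the usual defaults and may differ by version and installation:

# Quick sketch: list the nodes OpenNMS discovered via its ReST API.
# Base URL, port and credentials are common defaults and may differ.
import requests
import xml.etree.ElementTree as ET

BASE = "http://localhost:8980/opennms/rest"

resp = requests.get(BASE + "/nodes", auth=("admin", "admin"),
                    headers={"Accept": "application/xml"})
resp.raise_for_status()

for node in ET.fromstring(resp.content).findall("node"):
    print(node.get("id"), node.get("label"))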

For example, installing the DHCP monitoring plugin, configuring it, and then uninstalling it left the software unable to boot because the binaries needed to start the service were missing. In this case I was lucky: the error message clearly indicated that the failure was the DHCP monitoring service failing to start, and the solution was simply to comment the service back out in the corresponding configuration file (thereby disabling the attempt to start it when OpenNMS boots).

Cacti was very easy to install, and using Cacti is so simple that almost nobody has bothered to write tutorials on how to add a device and generate (and arrange) the graphs. But "simple" does not mean fully intuitive, and I spent a good half hour playing with the GUI to understand the workflow for adding devices and generating graphs, which is the basic reason for deploying Cacti in the first place (anyway, OpenNMS apparently generates "exactly" the same information, but of course you have to navigate several menus to find it, whereas Cacti puts it right in view after login).


Ganglia is always my first choice for gathering server performance and usage information, mainly because it installs quickly: it requires no more than installing the server software, the client software, and "hooking" them up in the configuration (you have to tell the client software - the agent on the server to be monitored - which server, or servers, will accept its communications). After installing Ganglia and leaving it collecting data, I began to review the other options, and by the time I was halfway to deciding on OpenNMS and Cacti, Ganglia had already built graph profiles of my test systems.

I installed Zabbix in minutes (and a couple of agents as well), and the GUI is very attractive although not as intuitive as OpenNMS's (which is not all that intuitive either). Anyway, I quickly triggered its self-discovery capability, which failed to capture a single device on the same network range I had loaded two hours earlier in OpenNMS (where the latter had perfectly detected my test servers and devices, SNMP data included). So I went looking for documentation explaining the - "intuitive" - procedure for adding devices, finding nothing in the GUI, and then in forums and mailing lists, again finding nothing; I guess it is so easy that nobody bothered to write a step-by-step, so for now I have left Zabbix running unconfigured (until I find out how to add devices). Likewise, at every Google search I keep finding claims that Zabbix is "very easy"; I suppose they refer to the installation, but I will have to devote more than the 20 minutes I spent in order to conclude anything about the software (and be able to load at least a couple of test devices). If it does work, it might turn out to be even more useful than OpenNMS.


Nagios (and Icinga - in my first contact with that software I used my Nagios expertise and could configure/manage it without any issue, so I can confirm the portability of skills) is what I left to try last. It is tempting to go for the software that is easiest and fastest to deploy, but that does not always mean the software is reasonably easy to manage day to day (well, in the case of Nagios it IS easy to manage), and/or that it scales well even in the medium term.

Nagios does not scale at all well in dynamic environments where production servers come up and go down constantly; the clearest way to see this is to implement Nagios to monitor cloud environments. If you implement Nagios in a virtualized environment, you quickly see how only your stable production servers are constantly monitored, while the other servers that are brought up and torn down dynamically - even though they are in production - are slowly left behind, with the Nagios configuration ending up dedicated only to the servers that run continuously without dynamic downtimes.

Besides, there is the temptation to integrate Nagios + Cacti, Nagios + RRDs, Nagios + whatever - a combination that will quickly stop reflecting the true overall performance profile of virtualized environments; unless, of course, you choose to arrange the architecture so that your "fixed" production servers always run on certain hypervisors, while the ones dynamically brought into and out of production, and the test ones (created and deleted regularly), are confined to other hypervisors.

Mmm, there is a problem there: the possibility of using idle hypervisor capacity is precisely the reason for virtualizing in the first place, so "limiting" how the virtualized infrastructure is laid out just because one (1) piece of software cannot "follow" the dynamism of the virtual infra throws away capabilities of the virtualization solution. Consider, for example, how dramatic that limitation becomes when the virtualized infra runs complex configurations (ones that define hierarchies for dynamically powering off hypervisors or VMs under certain performance profiles, for example).

Sure, Nagios can be "adapted" to dynamic scenarios, but those settings will be static (basically you could "play" at scheduling downtimes so that the scheduled downtimes match the times when you estimate the virtualized infra will take VMs offline), with the result that on one hand the virtual infra adjusts itself automatically, while on the other we have to keep (re)adapting the monitoring configuration for those servers by hand.
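To give an idea of the kind of "playing" involved, downtimes can be scheduled from a script through Nagios's external command file. A sketch; the command file path depends on the installation, and the host name and timestamps are examples:

# Sketch: schedule a fixed host downtime by writing a SCHEDULE_HOST_DOWNTIME
# external command to the Nagios command pipe. Path and host are examples.
import time

CMD_FILE = "/usr/local/nagios/var/rw/nagios.cmd"  # depends on the install

def schedule_host_downtime(host, start, duration_s, comment="planned VM shutdown"):
    end = start + duration_s
    now = int(time.time())
    line = (f"[{now}] SCHEDULE_HOST_DOWNTIME;{host};{start};{end};"
            f"1;0;{duration_s};automation;{comment}\n")
    with open(CMD_FILE, "w") as f:   # the command pipe is write-only
        f.write(line)

# e.g. take "vm-web-03" down for two hours starting at a given epoch time:
# schedule_host_downtime("vm-web-03", start=1372485600, duration_s=7200)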

Almost none of these conclusions is new (see the monitoring-sucks links), nor is using commercial software the solution (in general it has the same limitations when adapting to dynamic infrastructures), and it's not as if the same thing doesn't happen with the rest of the software I tried: OpenNMS, Cacti, Ganglia, etc.

I still have to test GroundWork and HypericHQ (similar to Zabbix: commercial, but at least open source or freeware) and see how they behave. I find it funny how the websites of all the monitoring products claim to be the best, or something like this :-D > "The World's Largest Monitoring Web Applications"

Tuesday, May 7, 2013

Complete IT solutions and an example with vSphere virtual infrastructure

This article discusses how "complete" solutions are in fact not complete, and how you have to complete them so that they really fulfill the purpose for which they were designed. There are also comments on the areas of responsibility of third-party IT providers and suppliers versus the internal IT area and its main client (the organization).


Good solutions, but partial
With infrastructure it is common to see that when an IT solution is bought, the supplier agrees to perform some work, comes to the company/organization, does the work and then leaves - leaving behind guarantees, for a while, under certain conditions, etc.

For example, when installing a vSphere infrastructure: the hypervisors are installed, the vCenter server is set up, the hypervisors are added to vCenter, maybe a few virtual machines are deployed - probably not - and that's it (up to there goes the work agreed with the IT service supplier in this example). The customer then takes the baton from there, managing the whole - now virtual - infrastructure: installing, migrating operating systems from physical to virtual, etc., etc.

Pricing infrastructure work as a one-off, with clear limits, is essential; the supplier is not going to take care, indefinitely, of every question related to what was initially installed/configured.


Complete IT Solutions
Now, the case of the organization's internal systems areas is rather different. Each internal IT area of the organization is required to sustain the continuity of the infrastructure over time, in the long term.

This is very different from the obligation of a commercial IT supplier; even so, it is common for IT solutions inside an organization to be implemented "one-time" and then left "as is", without maintenance and continuous improvement being treated as the priority they are (which, by the way, is very much part of the job for the employees of the internal IT area).

Following the vSphere infrastructure example, some steps after the "simple" installation and configuration of the vSphere virtual infra could be (more or less in order of strategic-technical importance):

1) Implement automated backup of the vCenter configuration (and its backend DB),

2) Implement automated backup of the ESXi configuration,

3) Deploy (in fact, buy) a virtual backup solution (Veeam, etc.) for the virtual machines themselves,

4) Implement automated configuration capture (extract all the vSphere settings, dump them into Git or the like, then keep doing it regularly, so as to have an accurate central record of each configuration change), AKA "configuration management" (see the sketch after this list),

5) Implement virtual infrastructure monitoring (several ways)

6) Deploy a vSphere Update Manager (to keep all hypervisors updated/patched),

7) Implement high availability for vCenter (i.e. set up another vCenter server, in any of the several possible ways),

8) Implement the required maintenance automation for vCenter (tip: the DB backend needs attention from time to time),

9) Define how to proceed, strictly on the technical side, to recover from the fall/crash/outage of any component of the vSphere virtual infrastructure (including having the tools installed and configured, having recovery plans in place, and having done drills and field tests to know that all the policies/procedures/tools actually work as they should).
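As a rough sketch of point 4, assuming the pyVmomi SDK is available: periodically dump a summary of each host's settings to files and commit them to Git. The vCenter address, credentials and repository path are placeholders, and the fields dumped are just a sample:

# Sketch of point 4: dump basic ESXi host settings and commit them to Git.
# vCenter address, credentials and repo path are placeholders.
import json, ssl, subprocess
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

REPO = "/var/configs/vsphere"          # an existing git repository

def dump_host_configs():
    ctx = ssl._create_unverified_context()      # lab setting only
    si = SmartConnect(host="vcenter.example.local", user="svc-backup",
                      pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            data = {
                "name": host.name,
                "product": host.summary.config.product.fullName,
                "vswitches": [vs.name for vs in host.config.network.vswitch],
            }
            with open(f"{REPO}/{host.name}.json", "w") as f:
                json.dump(data, f, indent=2, sort_keys=True)
    finally:
        Disconnect(si)

def commit_changes():
    subprocess.call(["git", "-C", REPO, "add", "-A"])
    subprocess.call(["git", "-C", REPO, "commit", "-m", "periodic config dump"])

if __name__ == "__main__":
    dump_host_configs()
    commit_changes()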

If you notice, extrapolating the general idea of the example, basically any infrastructure needs (besides installation, configuration and the initial production start-up):

- Backup,
- Configuration Management,
- Monitoring and Optimization / Maintenance / Continuous Improvement.
- Add redundancy / additional resilience (as part of the continuous improvement)
- Action plan for disaster recovery.

Without all these details (and several others not mentioned), the solution can "crash" very easily and stop working properly, and with a bit of bad luck also unexpectedly (e.g. New Year's morning, 3 am crash, call from the company owner to the IT staff at 3:10 when the people using the system report that it simply doesn't work; "use cases": emergency clinic, all-night pharmacy, security company, police, etc.).

* This is a matter of opinion, but to round out the TCO of the solution even further, you could add the forecast/estimate of future lifecycle-management costs, for example by anticipating a platform migration.

Following the example: foresee a possible/eventual migration path from VMware vSphere 5.1 (+ ESXi) to Microsoft Hyper-V 2012 + System Center 2012 Virtual Machine Manager.

For example: having to buy a SAN "now":
- increases the TCO of the vSphere solution, but
- lowers the TCO of the - possible, future - Hyper-V 2012 solution, and
- in fact lowers the TCO of the "virtual infrastructure" solution
(which is what actually matters to the organization), and therefore produces an acceptable "migration path", and the conclusion is that buying the SAN "would be good" :-)

Areas and limited time frames
Internal IT areas have a scope of involvement and obligations toward the IT infrastructure far greater than almost any "turnkey" solution a third party can provide, since even with the best available budget, the scope of involvement of an outsourced IT provider is always - but always - limited to certain tasks and obligations, and to a contracted window of time during which it will answer to the client. After that window, it no longer has any obligation to answer to the client.

The internal IT area, by contrast, has no such limit on its obligations to the organization: it must answer as a matter of organizational commitment (i.e. regardless of who happens to make up the area as employees/managers), continuously, and it is responsible for completing and correcting whatever limitations exist in the infrastructure.

Following the example: suppose the "turnkey" solution did not include a backup mechanism for the ESXi hypervisors. If the provider does not supply it, it is the duty of the internal IT area to complete the solution.


The IT provider's contractual obligation always has a practical limit: the maximum time contracted and how much work can be done during that time. Even though what is usually purchased is:
- "Solutions",
- "Turnkey solutions",
- "Solutions",

and other fine IT-vendor jargon, no matter what is "promised", the solutions provided by a third party will never be fully complete; they will only cover what was contracted (a task list contained in the contract), and any additional work, paid or not, is at the discretion and goodwill of the third-party provider.

Unless, of course, they are permanently contracted to do the work of the internal IT area... oops, but that contract also has a ceiling, so no, you cannot sustain unlimited outsourcing: there will always be more to pay, or additional services to outsource, to get something "unlimited" (which is why it is very good business indeed).

Monday, April 29, 2013

The first mass deployments of OpenStack begin (infrastructure services based on free software)

Well, not literally the first, but the first to be done at a couple of well-known companies (eBay, PayPal), and at the scale of several tens of thousands of servers at once.

Let's get started...
The enterprise success of free software comes primarily from two sources:
- being technically capable and as viable as the proprietary software option,
- being free to use, with no license cost.
Splitting hairs over TCO
The TCO (Total Cost of Ownership) includes several costs beyond the software license (the administrators you pay to install/manage the software, for example); however, given a large enough infrastructure, the license cost becomes a much larger share of the TCO than is usual in smaller infrastructures.
In companies like PayPal and eBay, with around 80,000 servers, the cost of software licenses is necessarily quite high given the number of servers.
So when there is a free software option - any one - that is technically reliable, companies with large infrastructures - and a couple of important notions about balancing the investment of resources against the savings from not buying licenses - quickly start projects to implement it and stop paying for licenses wherever paying for them is not, or would not be, essential.
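A toy back-of-the-envelope calculation of that effect; the figures are invented solely to show how the license share of the TCO grows with scale:

# Toy TCO comparison: license cost share at small vs. very large scale.
# All figures are invented for illustration only.
def tco(servers, license_per_server, admins, admin_salary):
    licenses = servers * license_per_server
    staff = admins * admin_salary
    total = licenses + staff
    return total, licenses / total

for servers, admins in [(50, 5), (80_000, 400)]:
    total, share = tco(servers, license_per_server=1_000,
                       admins=admins, admin_salary=60_000)
    print(f"{servers:>6} servers: TCO ~ ${total:,.0f}, "
          f"licenses are {share:.0%} of it")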
The news item below reports that PayPal and eBay started a pilot project migrating 10,000 servers from VMware to OpenStack, with the possible idea - if all goes well - of migrating the 80,000 servers at some point and no longer paying licenses for the virtualization software.
How we are doing with TCO around here
It is important to emphasize something: given the typical TCO, businesses/organizations always try to keep the IT specialist staff to a minimum (which settles around a common floor, usually a team of 4 to 6 people), and VMware guarantees that a large virtual infrastructure - say, 10-50 physical nodes with 20-100 virtual machines - can be managed with minimal staff and at a fairly reasonable staffing cost (since it is a market standard, many IT professionals today have some VMware management skills, and even hiring higher-level outside specialists does not entail the exceptionally high cost it would - probably - entail for complex software with few consulting options).
So not many companies/organizations are even looking to replace their existing VMware-based solutions for now (however expensive they are), and something similar can be said of other options already in place (Hyper-V, Xen, KVM, etc.).
And for organizations that are looking to leave VMware at some point (expensive as it is right now), there are other extremely viable commercial options (and very attractive ones, in my opinion):
- from the technical side (skills are reusable because the configuration/management concepts are similar to those found in VMware), and
- from the economic side (hypervisors that are free, or included with the operating system license, with additional tools available at reasonable prices: centralized managers, some appliances such as antivirus and virtualized networking, etc.).
Worth highlighting in particular: Windows Server 2012 (Hyper-V), RHEL and SLES (KVM and Xen), Citrix (Xen).
Close, but not yet...
In other words, it is a very good sign that massive OpenStack deployments are starting outside the existing niche, which was limited almost exclusively to cloud infrastructure providers (IaaS): Rackspace, HP, etc. But it will still be a while before we see OpenStack competing head to head in the typical market - in Argentina, for example - with the other virtualization infrastructure offerings, in the same way that today, when choosing a server OS, one compares RHEL, SLES, Windows Server, etc.
Sure, the pioneers whose IT teams have enough skillset to get ahead of the curve will be able to move much faster toward virtual infrastructure based on free software, with the many benefits that brings: not having to budget tens of thousands of US dollars for re-licensing costs in five-year cycles, and being able to spend that budget on other IT and/or business/organization needs.

Saturday, April 20, 2013

FROM FORUM ADMINISTRATOR TO COMMUNITY MANAGER
First, we must understand that a forum is created to gather opinions on a particular topic; it has a general theme that guides each specific debate proposed. An online forum allows the administrator to define multiple subforums within a single platform, which act as containers for the discussions users start.
Of course, other users can reply in the discussions already begun or start new ones, as they see fit. Internet forums can be classified into those requiring registration to participate and those where contributions can be made anonymously.
In the first type, users choose a nickname, associate a password with it and probably an email address to confirm their wish to join the forum. Typically, members get certain advantages, like being able to customize the appearance of the site, their posts and their profiles.
Some users may obtain privileges in the forum; they are then called moderators.
These privileges may include the ability to edit other people's messages, move or delete discussions, and other mechanisms designed to keep the climate cordial and friendly (under rules set by the administrator).
One of the main features of forums is that we do not know many facts about the "forum user" (the person who participates in it). Generally, one simply enrolls by providing a nickname (alias, handle, username) and an e-mail address, which may even have been created for the sole purpose of participating in the forum and keeping total anonymity.
Now that we have defined the forum quite precisely, we are able to understand the assertion in the title of this section: that is, the passage from forum administrator to Community Manager.
To begin with, the administrator is confined to this area, while community members, together or separately, may be interacting in other online services without the administrator knowing or taking any part. From this point of view, the claim being made is that the forum can be part of the community, but it is not the community.
Forums were very useful before the advent of social networks, and we should highlight the advantage they had in terms of managing participation and the possibility of centralizing particular topics. But is it correct to speak of forums in the past tense? The reader probably participates in one or more and totally disagrees with the tense used, but it is undeniable that participation has declined markedly, and when it comes to creating communities, forums should not be the first alternative that comes to a Community Manager's mind.
Today we can create our own social network using free tools offered on the Web, or other more professional ones which, while not free, are quite affordable, such as Ning.
The big difference, then, between a forum administrator and a Community Manager is that the former is responsible for that area only, while the CM may administer forums in parallel and also manage all the other spaces where they believe their community is present.

Sunday, April 7, 2013

HA, DRS, VMotion and Storage VMotion

We will analyze the most powerful tools of the VMware virtual infrastructure.

VMotion
VMotion is an essential tool of the virtual infrastructure: it basically allows moving a virtual server from one ESXi node to another.
This option is really interesting because, when we use it, there is no loss of service or dropped connections on the machine being moved. This is possible thanks to the file system ESXi uses, called VMFS.

Once the network is configured for VMotion, we can do a test migration. Migration can be hot or cold, that is, with the server powered on or off.
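For reference, a hot migration can also be triggered from a script instead of the vSphere Client; here is a minimal sketch with the pyVmomi SDK (the vCenter address, credentials, VM and host names are placeholders):

# Minimal sketch: trigger a VMotion of one VM to another ESXi host via pyVmomi.
# vCenter address, credentials, VM and host names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

ctx = ssl._create_unverified_context()           # lab use only
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    vm = find_by_name(content, vim.VirtualMachine, "test-vm-01")
    dest = find_by_name(content, vim.HostSystem, "esxi-02.example.local")
    task = vm.Migrate(host=dest, pool=vm.resourcePool,
                      priority=vim.VirtualMachine.MovePriority.defaultPriority)
    print("migration task started:", task.info.key)
finally:
    Disconnect(si)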

VMotion is essential for creating dynamic, automated datacenters. Physical hardware maintenance can now be performed without affecting business continuity in any respect. It is a technology from around 2004, meaning that, at the time of writing, it has more than 8 years of development behind it.
To connect the nodes to the storage and use VMotion, you can use a Fibre Channel SAN (Storage Area Network); compatibility with NAS (Network Attached Storage) and iSCSI SAN storage systems, which are more economical, is also supported.
Migration tasks have priorities and can be scheduled to run at a certain time of day, because we could have a problem if a virtual server is moved at a moment when, for some reason, it should not be moved.
Suppose we are performing a configuration task on a NIC in a node, and another administrator uses VMotion to place a virtual server on the node we are working on: we could lose the service and business continuity. For this reason, the console has a view of all the tasks performed in vCenter.

Monday, February 25, 2013

The Robot Architecture: new technology
Navigation, as the general task of leading a robot to a target destination, is naturally intermingled
with other low-level tasks such as obstacle avoidance, and high-level tasks
such as landmark identification. We can see each of the tasks, from an engineering point
of view, as a system, that is, systems require and offer services one another. These systems
need to cooperate, since they need one another in order to achieve the overall
task of reaching the target. However, they also compete for controlling the available
actuators of the robot. To exemplify this cooperation and competition, imagine a robot
controlled by three systems, the Pilot system, the Vision system and the Navigation
system. Actually, these three systems compose the architecture we have used to control
our robot, which will be described in detail in the rest of this chapter. Regarding the
cooperation, the Navigation system needs the Vision system to recognize the known
landmarks in a particular area of the environment or to find new ones, and it also needs
the Pilot system to move the robot towards the target location. Regarding the competition,
the Navigation system may need the robot to move towards the target, while
the Pilot system may need to change the robot’s trajectory to safely avoid an obstacle.
Moreover, the Pilot may need the camera to check whether there is any obstacle ahead
and, at the same time, the Navigation system may need to look behind to localize the
robot by recognizing known landmarks. Thus, some coordination mechanism is needed
in order to handle this interaction among the different systems. The mechanism has to
let the systems use the available resources in such a way that the combination of these
interactions results in the robot reaching its destination.
We propose a general architecture for managing this cooperation and competition.
We differentiate two types of systems: executive systems and deliberative systems. Executive
systems have access to the sensors and actuators of the robot. These systems
offer services for using the actuators to the rest of the systems and also provide information
gathered from the sensors. On the other hand, deliberative systems take higherlevel
decisions and require the services offered by the executive systems in order to
carry out the task assigned to the robot. Despite this distinction, the architecture is not
hierarchical, and the coordination is made at a single level involving all the systems.
The services offered by the executive systems are not only available to the deliberative
systems; they are also available to the executive systems themselves. Actually, an executive system must compete with the rest of the systems even for the services it is
offering. The systems (no matter their type) can exchange information between them
(be it sensory information or any other information they could have – e.g. map of the
environment).
The coordination is based on a simple mechanism: bidding. Deliberative systems
always bid for the services offered by executive systems, since this is the only way
to have their decisions executed. Executive systems that only offer services do not
bid. However, those executive systems that require services from any executive system
(including themselves) must also bid for them. The systems bid according to the internal
expected utility associated to the provisioning of the services. A coordinator receives
these bids and decides which service each of the executive systems has to engage in.
Although we use the term “bidding”, there is no economic connotation as in an
auction. That is, systems do not have any amount of money to spend on the bids,
nor is there any reward or good given to the winning system. We use it as a way to
represent the urgency of a system for having a service engaged. The bids are in the
range [0, 1], with high bids meaning that the system really thinks that the service is the
most appropriate to be engaged at that moment, and with low bids meaning that it has
no urgency in having the service engaged.
This bidding mechanism is a competitive coordination mechanism, since the action
executed by each system is the consequence of a request of one of the systems, not a
combination of several requests for actions made by different systems, as it would be in
a cooperative mechanism.
This modular view forms an extensible architecture. To extend this architecture
with a new capability we would just have to plug in one or more new systems, eventually
adding new sensors or actuators, and eventually changing the bidding functions
of the existing systems. Not only that, it also permits us to recursively have a modular
view of each one of the systems, as will be soon seen in the design of our Navigation
system. Moreover, this architecture is not thought only for navigation purposes since
its generality can be used for any task that could be assigned to a robotic system.
For our specific robot navigation problem, we have instantiated the general architecture
described above. It has two executive systems, the Pilot and Vision
systems, and one deliberative system, the Navigation system. Each system has the following
responsibilities. The Pilot is responsible for all motions of the robot, avoiding
obstacles if necessary. The Vision system is responsible for identifying and tracking
landmarks (including the target landmark). Finally, the Navigation system is responsible
for taking higher-level decisions in order to move the robot to a specified target.
The robot has two actuators: the wheels’ motors, used by the Pilot system, and the camera
motor, used by the Vision system. The available sensors are the wheel encoders
and bumpers, which provide odometric and bumping information to the Pilot, and the
images obtained by the camera, used by the Vision system to identify landmarks. The
Pilot system offers the service of moving the robot in a given direction, and the Vision
system offers the service of moving the camera and identifying the landmarks found
within a given area. The bidding systems are the Pilot and the Navigation system, while
the Vision system does not bid for any service.
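A minimal sketch of this competitive bid-and-select coordination, purely as an illustration (not the original implementation; the service names are invented):

# Illustrative sketch of the bidding coordination: each system bids in [0, 1]
# for services offered by executive systems; the coordinator engages, per
# executive system, the service with the highest bid. Not the original code.

def coordinate(bids):
    """bids: list of (bidder, executive_system, service, value) tuples."""
    winners = {}
    for bidder, executive, service, value in bids:
        best = winners.get(executive)
        if best is None or value > best[2]:
            winners[executive] = (bidder, service, value)
    return winners

bids = [
    ("Navigation", "Pilot",  "move_towards_target", 0.6),
    ("Pilot",      "Pilot",  "avoid_obstacle",      0.9),  # urgent: obstacle ahead
    ("Navigation", "Vision", "identify_landmarks",  0.7),
    ("Pilot",      "Vision", "check_path_ahead",    0.4),
]

for executive, (bidder, service, value) in coordinate(bids).items():
    print(f"{executive} engages '{service}' requested by {bidder} (bid {value})")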

Saturday, February 23, 2013

3D audio technologies February 2013

State of the art in 3D audio technologies 2013
In this chapter we present a brief overview of the state of the art in 3D
surround sound. The technologies reviewed here span from complete frameworks that account for the whole chain from capture to playback, such as Ambisonics and Wavefield Synthesis, to extensions of existing 2D approaches, like amplitude panning, to a brief mention of hybrid systems and solutions that have been recently introduced to the market.
This chapter is not meant to be a complete and detailed description of the
technologies, but just to introduce their most relevant aspects and give the
reader a basic knowledge of the subject, providing a context for the topics
that are mentioned in the rest of the thesis. References to key research
papers and books are provided in each section.
Binaural audio
Binaural audio is perhaps the most straightforward way of dealing with
three-dimensional audio. Since we perceive three-dimensional sound with
our two ears, all the relevant information is contained in two signals; indeed,
our perception is the result of interpreting the pressure that we receive at
the two ear drums, so recording these signals and playing them back at the
ears should suffice for recreating life-like aural experiences.
Our perception of the direction of sound is based on specific cues, mostly
related to signal differences or similarities between the ears, that our brain
interprets and decodes. In the end of the nineteenth century, Lord Rayleigh
identified two mechanisms for the localization of sound: time cues (which
are also interpreted as phase differences) are used to determine the direction of arrival at frequencies below 700 Hz, while intensity cues (related to
signal energy) are dominant above 1.5 kHz. In the low
frequency region of the audible spectrum, the wavelength of sound is large
compared to the size of the head, therefore sound travels almost unaffected
and reaches both ears regardless of the direction of arrival. Besides, unless
a sound source is located very close to one ear, the small distance between
ears does not cause any significant attenuation of sound pressure due to the
decay with distance.
The basic concept behind 3D binaural audio is that if one measures the
acoustic pressure produced by a sound field in the position of the ears of
a listener, and then reproduces exactly the same signal directly at the ears
of the listener, the original information will be reconstructed. Binaural audio is perfectly linked with our perception, because it takes implicitly into
account the physical mechanisms that take part in our hearing. Binaural
recordings are implemented by means of manikin heads with shaped pinnae
and ear canals, with two pressure microphones inserted at the end of the
ear canal, thus collecting the signals that a human would perceive. Experiments have been done with miniature microphones inserted into the ear canals of a subject, to obtain recordings that are perfectly tailored to a person's shape of the outer ear. Binaural playback requires
using headphones to deliver each ear the corresponding recorded signal, and
the technique delivers good spatial impression. It is worth mentioning that
while listening to conventional mono or stereo material through headphones
conveys a soundstage located within the head, the use of binaural technique
accurately reproduces sounds outside the head, a property which is called
“externalization”.
Physically, the signals that reach the ear drums when a sound source
emits a sound from a certain position can be expressed as the convolution
between the sound emitted by the source and the transfer function between
the position of the source and each ear (neglecting effects of the room).
The head related transfer functions (HRTF) depend on the position of the
source, the distance from the listener and the peculiar shape of the outer
ear that is used during recording. Various HRTF databases are available
which offer the impulse response recordings done with the source sampling
a sphere at a fixed distance (far field approximations are used and distance
is usually neglected). With such functions, binaural material can also be
synthesized by convolution: once a source and its position are chosen, the
left and right binaural signals are obtained by convolving the source with
the left and right HRTF corresponding to the position of the source. In this
way, realistic virtual reality scenarios can be reproduced. In the real time
playback of synthetic sound fields, the adoption of head tracking to detect
the orientation of the listener and adapt the sound scene accordingly has
been proven invaluable for solving the localization uncertainty related to the
cone of confusion or the front-back ambiguity.
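A minimal sketch of this synthesis step, assuming NumPy and SciPy are available and that a pair of HRIR files for the desired direction has already been obtained from one of those databases (the file names are placeholders):

# Sketch: render a mono source to binaural by convolving it with the left and
# right head-related impulse responses (HRIRs) of the chosen direction.
# File names are placeholders; HRIRs would come from an HRTF database.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

fs, mono = wavfile.read("source_mono.wav")
_, hrir_l = wavfile.read("hrir_az30_el0_left.wav")
_, hrir_r = wavfile.read("hrir_az30_el0_right.wav")

mono = mono.astype(np.float64)
left = fftconvolve(mono, hrir_l.astype(np.float64))
right = fftconvolve(mono, hrir_r.astype(np.float64))

binaural = np.stack([left, right], axis=1)
binaural /= np.max(np.abs(binaural))            # normalize to avoid clipping
wavfile.write("binaural_out.wav", fs, (binaural * 32767).astype(np.int16))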
In the past year, 2013, the evolution of technology has taken an unexpected leap.

The technology boom of the new generation shows up in terms such as "cell phone and Internet" (smartphone, tablet, iPhone, etc.), which, rather than being separate realities, complement each other.
However, in the development of these types of technology there comes a point where they converge; once the network becomes a global communications medium, it opens up and exceeds the expectations of its creators. The Internet is no longer exclusively for the military and government, and combined with telephony services it becomes a medium of social interaction that is currently present in all areas of daily life.
Today these technologies are combined in a single device, the cell phone, which is no longer limited to letting two people communicate with each other, but has evolved to include modalities such as Internet access in almost all its aspects (data, mp3, teleconferencing, transmission of photo files and videos, etc.).
This brings countless advantages: it accelerates the pace at which information is obtained, facilitates communication, and reduces transmission and response times; in other words, it turns everyday life into a technological event, all tied to the economic growth of societies, and beyond all the changes in the natural order of things that technology generates.
Having seen the many wide-ranging and constant changes that mobile telephony and the Internet have brought about in the global community, my interest arose in learning more about the issues that shape this revolution in our own environment.
To address the above, this paper proposes the development of a software application on a mobile computing platform, providing access to information stored in a database on a Web server through mobile devices such as cell phones.
The application provides for the registration and tracking of information belonging to a pharmaceutical entity, i.e. the relevant client information, product purchases and prescription medicines.
This gives customers the ability to self-manage their purchases anytime and anywhere, without having to physically go to the pharmacy branches, with nothing more than the help of a modern cell phone.
In turn, the development of a web application platform accessible on the intranet is proposed, providing additional features such as user registration, stock control, and over-the-counter sale of drugs and products.
The development of a Web site accessible from the Internet is also proposed, presented as the site of a pharmacy, which includes e-commerce functions such as customer registration and on-line sale of products and/or drugs, allowing a customer to make purchases virtually.
Differences OLTP vs Data Warehouse in February 2013

Traditional systems of transactions and data warehousing applications are polar opposites in terms of their design requirements and operating characteristics.
OLTP applications are organized to execute the transactions they were built for, for example moving money between accounts, a charge or credit, an inventory return, etc. A data warehouse, on the other hand, is organized around concepts such as customer, invoice, products, etc.
Another difference lies in the number of users. Normally, a data warehouse has fewer users than an OLTP system. It is common to find transactional systems accessed by hundreds of users simultaneously, while a data warehouse is accessed by only tens. OLTP systems perform hundreds of transactions per second, while a single data warehouse query can take minutes. Another factor is that transactional systems are frequently smaller in size than data warehouses, because a data warehouse can consolidate the information of several OLTP systems.
There are also design differences: while an OLTP schema is highly normalized, a data warehouse tends to be denormalized. An OLTP system typically consists of a large number of tables, each with few columns, while a data warehouse has fewer tables, but each of them tends to have a greater number of columns.
OLTP systems are updated continuously by day-to-day operations, while data warehouses are updated periodically, in batch.
OLTP structures are very stable and rarely change, while data warehouse structures change constantly as they evolve. This is because the types of queries they are subjected to are varied, and it is impossible to foresee them all in advance.
Improved information delivery: complete, correct, consistent, timely and accessible - the information people need, at the time they need it and in the format they need.
Improved decision-making: with better supporting information, decisions are reached faster; business people also gain greater confidence in their own decisions and in everyone else's, and achieve a better understanding of the impact of their decisions.
Positive impact on business processes: when people are given access to better-quality information, the company is in a position to:
   · Eliminate delays in business processes resulting from incorrect, inconsistent and/or nonexistent information.
   · Integrate and optimize business processes through shared and integrated information sources.
   · Eliminate the production and processing of data that is not used or required, the result of poorly designed applications or applications no longer in use.

Improving productivity and efficiency in February 2013

Improving productivity and efficiency through a multistage implementation
Financial services firms can take an existing, inefficient infrastructure for risk management and compliance and gradually grow it into an integrated, highly efficient grid system.

As shown, an existing infrastructure may comprise stovepipes of legacy applications: disparate islands of applications, tools, and compute and storage resources with little to no communication among them. A firm can start by enabling one application (a simulation application for credit risk modeling, for example) to run faster by using grid middleware to virtualize the compute and storage resources supporting that application.

The firm can extend the same solution to another application, for example a simulation application used to model market risk. Compute and storage resources for both simulation applications are virtualized by extending the layer of grid middleware; thus both applications can share processing power, networked storage and centralized scheduling. Resiliency is achieved at the application level through failover built into the DataSynapse GridServer. If failure occurs or the need to prioritize particular analyses arises, one application can pull unutilized resources that are supporting the other application. This process also facilitates communication and collaboration across functional areas and applications to provide a better view of enterprise risk exposure.
Alternatively, a firm can modernize by grid-enabling a particular decision engine. A decision engine, such as one developed with Fair Isaac's tools, can deliver the agility of business rules and the power of predictive analytic models while leveraging the power of the grid to execute decisions in record time. This approach guarantees that only the compute-intensive components are grid-enabled, while simultaneously migrating these components to technology specifically designed for decision components.

Over time, all applications can become completely grid-enabled or can share a common set of grid-enabled decision engines. All compute and data resources become one large resource pool for all the applications, increasing the average utilization rate of compute resources from 2 to 50 percent in a heterogeneous architecture to over 90 percent in a grid architecture.
Based on priorities and rules, DataSynapse GridServer automatically matches application requests with available resources in the distributed infrastructure. This real-time brokering of requests with available resources enables applications to be immediately serviced, driving greater throughput. Application workloads can be serviced in task units of milliseconds, thus allowing applications with run times in seconds to execute in a mere fraction of a second. This run-time reduction is crucial as banks move from online to real-time processing, which is required for functions such as credit decisions made at the point of trade execution. Additionally, the run time of applications that require hours to process, such as end-of-day profit and loss reports on a credit portfolio, can be reduced to minutes by leveraging this throughput and resource allocation strategy.
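The underlying idea can be illustrated without any particular grid product: split a long risk simulation into many independent tasks and farm them out to whatever compute resources are free. A generic sketch using Python's standard library, not DataSynapse's API; the loss model is invented for the example:

# Generic illustration of farming out a portfolio loss simulation as many
# independent tasks; this is standard-library Python, not the GridServer API,
# and the loss model is invented.
import random
from concurrent.futures import ProcessPoolExecutor

def simulate_losses(args):
    seed, n_scenarios = args
    rng = random.Random(seed)
    # toy model: each scenario counts defaults over a 1,000-position portfolio
    return [sum(1 for _ in range(1000) if rng.random() < 0.02)
            for _ in range(n_scenarios)]

if __name__ == "__main__":
    tasks = [(seed, 500) for seed in range(200)]       # 100,000 scenarios total
    with ProcessPoolExecutor() as pool:
        results = pool.map(simulate_losses, tasks)
    losses = sorted(l for chunk in results for l in chunk)
    var_99 = losses[int(0.99 * len(losses))]
    print("99% VaR (defaults per scenario):", var_99)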
The workhorses of the IBM grid infrastructure in February 2013

The workhorses of the IBM grid infrastructure are the grid engines: desktop PCs, workstations or servers that run the UNIX, Microsoft Windows or Linux operating systems. These compute resources execute the various jobs submitted to the grid, and have access to a shared set of storage devices.

The IBM Grid Offering for Risk Management and Compliance relies on grid middleware from DataSynapse to create distributed sets of virtualized resources. The production-proven, award-winning DataSynapse GridServer application infrastructure platform extends applications in real time to operate in a distributed computing environment across a virtual pool of underutilized compute resources. GridServer application interface modules allow risk management and compliance applications, and next-generation development of risk management and compliance application platforms, to be grid-enabled.

IBM DB2 Information Integrator enables companies to have integrated, real-time access to structured and unstructured information across and beyond the enterprise. Critical to the grid infrastructure, the software accelerates risk and compliance analytics applications that process massive amounts of data for making better informed decisions. DB2 Information Integrator provides transparent access to any data source, regardless of its location, type or platform.
Real world, real successes in February 2013

IBM is the industry-leading supplier of grid solutions, services and expertise to the scientific and technical communities, as well as to the financial services sector. Leveraging its considerable experience in implementing commercial grids worldwide, IBM has created targeted grid offerings customized to meet the unique grid computing needs of the financial services industry. IBM Grid Computing is currently engaged with more than 20 major financial institutions in North America, Europe and Japan, and more than 100 organizations worldwide.

Wachovia worked with IBM and DataSynapse to enhance the processing speed of trading analytics in the financial services company's fixed income derivatives group. Before implementing a grid solution, profit and loss reports and risk reports took as long as 15 hours to run; now, with the grid solution in place, Wachovia can turn around mission-critical reports in minutes on a real-time, intraday basis. Moreover, trading volume increased by 400 percent, and the number of simulations by 2,500 percent. As a result, the group can book larger, more exotic and more lucrative trades with more accurate risk taking.
The importance of interaction analysis in CSCL  in 2013

The importance of interaction analysis in CSCL
We know that these collaborative learning environments are characterized by a high degree of user interaction with the system, which generates a large number of action events. Managing these action events is a key issue in such applications since, on the one hand, analyzing the data collected from real-life, online collaborative learning situations helps us better understand important aspects of how the group functions and of the collaborative learning process, which can guide both the design of a more functional workspace and its software components, as well as the development of improved facilities such as awareness, feedback, monitoring of the workspace, and evaluation and follow-up of the group's work by a coordinator, tutor, etc. Indeed, filtering the data and managing the events properly makes it possible to establish a list of parameters that can be used to analyze the group's activity in the shared space (e.g. tutor-to-group or member-to-member communication flow, asynchronism in the group space, etc.). These parameters make it possible to assess the efficiency of the group's activities, so as to improve group performance and the individual attitudes of its members in the shared workspace.
Furthermore, for this purpose the application design will need to organize and manage both the resources offered by the system and the users accessing those resources. All this user-user and user-resource interaction generates events, or "logs", found in the log files, which represent the information base for the statistical processing aimed at obtaining knowledge about the system. This facilitates the collaborative learning process by keeping users abreast of what is happening in the system (for example, the contributions of others, documents created, etc.) and by tracking user behavior in order to provide support (e.g. helping students who are unable to perform a task on their own). Therefore, user-user and user-resource interaction is a critical element in any collaborative learning environment, enabling groups of students to communicate with each other and achieve common goals (e.g. a collaborative classroom activity).
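As a toy illustration of how such parameters could be extracted from the event logs (the log format and field names are invented):

# Toy sketch: derive simple group-activity parameters from an event log.
# The log format (timestamp, user, event_type, target) is invented.
import csv
from collections import Counter

def activity_parameters(path="cscl_events.csv"):
    per_user = Counter()
    per_type = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            per_user[row["user"]] += 1
            per_type[row["event_type"]] += 1
    return per_user, per_type

if __name__ == "__main__":
    users, types = activity_parameters()
    print("contributions per member:", dict(users))
    print("events by type (messages, documents, ...):", dict(types))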
Although user interaction is the most important thing to manage in these applications, it is usually also important to be able to monitor and control overall system performance. This allows the administrator to continuously watch the critical parts of the system and act as necessary. Moreover, it adds an implicit layer of security on top of what already exists (for example, monitoring user habits to detect fraudulent use of the system by unauthorized users).
To effectively communicate the knowledge gained from the group's user activity in the form of awareness and feedback, CSCL applications should provide full support for the three aspects that are essential in all collaborative applications, namely coordination, communication and collaboration, in order to create virtual environments where students, teachers, tutors, etc. are able to cooperate with each other to achieve a common learning goal. Coordination involves organizing the group in order to achieve the objectives set and monitoring user activity, which is possible by maintaining awareness among the participants.
Communication basically refers to the exchange of messages between users, within and between groups, and may take place in both synchronous and asynchronous modes. Finally, collaboration allows members of the group to share all kinds of resources, again in both synchronous and asynchronous modes. Coordination, collaboration and communication will all generate many events which, once processed and analyzed, will be communicated back to the users in order to provide them with awareness that is as immediate as possible and a flow of feedback that is as constant as possible.
Humble Opinions Again, February 2013

Having reviewed the state of the art, there are two possible directions that research in coreference resolution should follow: the use of models more expressive than mention pairs to handle the problem, such as entity-mention models, and the incorporation of new information, such as world knowledge and discourse coherence. In some cases, this information cannot be expressed in terms of pairs of mentions; that is, it is information that involves several mentions or partial entities at once. Therefore, an experimental approach in this direction requires combining the expressiveness of the entity-mention model with the most typical features of the mention-pair model.
We defined an approach based on constraint satisfaction that represented the problem in a hypergraph and solved it by relaxation labeling, reducing coreference resolution to a hypergraph partitioning problem under a set of constraints. Our approach handled the mention-pair and entity-mention models at the same time, and was able to incorporate new information by adding as many constraints as necessary. Furthermore, our approach overcame the weaknesses of previous approaches in state-of-the-art systems, such as linking contradictions, classifications without context, and a lack of information when evaluating pairs.
The system developed, RelaxCor, achieved state-of-the-art results using only the mention-pair model without new knowledge. Moreover, experiments with the entity-mention model showed how the system is able to incorporate knowledge in a constructive way.
In addition, as explained in the section above, we have proposed a method based on clustering the training examples in which all positive examples are included, while the negative examples most similar to the positive ones are kept and the rest are discarded. This method reduces the number of negative examples without losing any positive information.
Regarding feature selection, many works just manually select the most informative feature functions and discard the noisy ones. Few researchers have incorporated an automatic feature selection process. We have made a small contribution in this area by selecting feature functions through a hill-climbing process.
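A generic sketch of that kind of hill-climbing selection (not RelaxCor's actual code; the evaluate() function stands in for training and scoring the resolver with a candidate feature set):

# Generic hill-climbing feature selection sketch (not RelaxCor's code).
# evaluate() stands in for training/scoring the system with a feature subset.
def hill_climb_features(all_features, evaluate):
    selected = set()
    best_score = evaluate(selected)
    improved = True
    while improved:
        improved = False
        for feat in all_features - selected:
            score = evaluate(selected | {feat})
            if score > best_score:
                best_score, selected = score, selected | {feat}
                improved = True
    return selected, best_score

# toy usage: pretend some features help and others only add noise
useful = {"head_match", "gender_agree", "distance"}
score = lambda feats: len(feats & useful) - 0.1 * len(feats - useful)
print(hill_climb_features(useful | {"noise1", "noise2"}, score))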
The other contributions include techniques for improving performance, such as balance optimization, pruning, and reordering. The balance parameter was used to find the optimal point between precision and recall, while the pruning process reduced the computational cost and kept the system's performance from depending on the size of the documents. Both techniques were included in a development process that facilitated the optimization of the system for a target measure. The reordering process improved performance by reducing the number of possible labels assigned to the most informative mentions, which caused the most reliable coreferential relations to be resolved first.
Experiments to add world knowledge were performed in order to improve coreference resolution performance. Although these experiments did not ultimately achieve a significant improvement, the reason seems to be more related to the type and source of the information and its extraction than to the approach used to incorporate it.