Medical ultrasound is a diagnostic technique based on the application of ultrasound to obtain views of internal body structures or organs. Among its advantages are its low cost, non-use of ionising radiation, and real-time imaging. The learning process implies developing the ability to interpret images that correspond to 2D scans of the body's interior. As the technique is based on the emission of mechanical waves and the detection of their reflections, these waves are modified by the elements they encounter along their travel paths. In this process it is normal for artefacts to result from reflections or other sources, but the experienced physician discards them through a careful choice of probe scan movements and by varying its orientation. In this project we intend to create a system that includes a haptic device (phantom or …) to simulate the positioning of the sonograph probe, and an HMD to visualise both the (virtual) patient and the sonograph display.
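The echo principle described above can be sketched in a few lines: each tissue interface reflects a fraction of the emitted pulse, and the echo's arrival time encodes the interface depth. The layer depths and impedance values below are illustrative assumptions, and attenuation and multiple reflections are ignored.

```python
SPEED_OF_SOUND = 1540.0  # m/s, average in soft tissue

def reflection_coefficient(z1, z2):
    """Fraction of the incident pressure reflected at an interface
    between media with acoustic impedances z1 and z2."""
    return (z2 - z1) / (z2 + z1)

def a_mode_echoes(layers):
    """layers: list of (start_depth_m, impedance) tuples, ordered by depth.
    Returns one (arrival_time_s, echo_amplitude) pair per interface."""
    echoes = []
    for (_, z1), (depth, z2) in zip(layers, layers[1:]):
        t = 2.0 * depth / SPEED_OF_SOUND      # round trip to the interface
        echoes.append((t, reflection_coefficient(z1, z2)))
    return echoes

# Hypothetical fat/muscle/bone layering (impedances in MRayl):
tissue = [(0.00, 1.38), (0.02, 1.70), (0.05, 7.80)]
for t, r in a_mode_echoes(tissue):
    print(f"echo at {t * 1e6:.1f} us, amplitude {r:+.2f}")
```

The large impedance jump at the bone interface produces the strong echo (and shadowing) that trainees must learn to recognise as a potential artefact source.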
- You should develop your work in a continuous way; I will not accept people who work only during the last month.
- You must demonstrate the evolution of your work every one or two weeks.
- You must work in the lab, integrated in the team, and not alone at home (or somewhere else).
- You are close to being a professional, so you must behave as one.
- Students are accepted on a first-come, first-served basis, also taking into account their grades. So if you come too late, the subject will probably already be taken.
Note that this is a dynamic list and it may be updated frequently, especially just before the beginning of the semesters.
Smart interactive houses have long been the subject of conversation and commercial advertising. Several manufacturers have developed proprietary protocols for controlling devices, but as their offer is either limited in the variety of supported devices or too expensive, there has been no massive adherence to their use. This is one side of the story; the other comes from the type of usage these systems permit and from their more or less complicated and limited interfaces. We can say that, frequently, these systems enable one to do everything that he or she does not need. More recently, some of these systems have supported automatic profiles that enable a given preset configuration to be selected at particular times, or when a user arrives at or leaves home. The aim of this project is to develop an extensible platform that integrates signals from a network of sensors, such as cameras, PIR, temperature and noise, among others. This will create the basis for the analysis of people's behaviour, in particular for aged people, to detect abnormal and risky situations such as falls, strokes, excessive immobility, etc.
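One way to make such a platform extensible is a publish/subscribe bus that routes readings from heterogeneous sensors to any number of analysers, so new sensor types or detection rules plug in without touching existing code. The sketch below is a minimal illustration; the sensor name and the immobility rule are assumptions, not part of the proposal.

```python
from collections import defaultdict

class SensorBus:
    """Routes readings from heterogeneous sensors to subscribed analysers."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, sensor_type, handler):
        self.handlers[sensor_type].append(handler)

    def publish(self, sensor_type, reading):
        for handler in self.handlers[sensor_type]:
            handler(reading)

alerts = []

def immobility_rule(reading):
    # Toy rule: no PIR-detected motion for over an hour flags a risk situation.
    if reading["seconds_since_motion"] > 3600:
        alerts.append("excessive immobility")

bus = SensorBus()
bus.subscribe("pir", immobility_rule)
bus.publish("pir", {"seconds_since_motion": 5400})
print(alerts)  # → ['excessive immobility']
```

Camera- or temperature-based analysers would subscribe in exactly the same way, which is what keeps the platform open to new devices.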
Manipulation of microscopic objects is gaining relevance as techniques evolve and bring the possibility of dealing with smaller and smaller objects. In biology, for example, the possibility of manipulating individual living cells, embryos and stem cells, and of performing gene therapies, is of utmost importance. Performing injections in cells, for instance, is a task that requires extensive practice, in particular because it is performed at a scale where the magnitude of the forces involved is very hard to measure, and only a 2D-like visualisation of the task is possible through a microscope. Given this, it is quite normal that in many cases the cells burst due to imprecise manipulation, which in turn results from the lack of reliable feedback that would let the operator sense the task properly. In this work we intend to develop a system for manipulating cells or other microscopic objects, based on the generation of haptic forces derived from visual cues extracted from the manipulated objects' deformations.
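A minimal sketch of the visual-to-haptic mapping, assuming a linear-spring model: the deformation is estimated as the relative loss of area of the cell's tracked contour, a crude proxy for membrane indentation. The stiffness value and the contour-based measure are illustrative assumptions, not the project's prescribed method.

```python
def contour_area(points):
    """Shoelace area of a closed 2-D contour given as (x, y) tuples."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def haptic_force(rest_contour, deformed_contour, stiffness=0.5):
    """Force magnitude proportional to the relative area lost by the
    cell contour, rendered on the haptic device each video frame."""
    a0 = contour_area(rest_contour)
    a1 = contour_area(deformed_contour)
    deformation = max(0.0, (a0 - a1) / a0)   # dimensionless, in [0, 1]
    return stiffness * deformation

rest = [(0, 0), (10, 0), (10, 10), (0, 10)]   # relaxed membrane outline
pressed = [(0, 0), (10, 0), (10, 8), (0, 8)]  # outline indented by the pipette
print(haptic_force(rest, pressed))  # → 0.1
```

In a real system the contours would come from segmenting the microscope image, and the stiffness would be calibrated so the rendered force feels plausible to the operator.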
Diabetes Mellitus is a chronic disease that is increasingly prevalent in our society, affecting both sexes and all ages. It is characterised by increased levels of sugar (glucose) in the blood, hyperglycaemia. Globally, in 2015, the estimated prevalence of diabetes was 415 million people, and the disease is estimated to reach 642 million people by 2040. In 2015 the estimated prevalence of Diabetes in the Portuguese population aged 20-79 (7.7 million individuals) was 13.3%; that is, more than 1 million Portuguese in this age group have Diabetes. More than a quarter of people aged 60-79 have Diabetes. The persistence of a high level of blood glucose, which is characteristic of Diabetes, results in tissue damage even when no symptoms are present to alert the individual to the disease or to its decompensation. In virtually all developed countries, diabetes is the leading cause of blindness, kidney failure, and lower limb amputation. (Source: Diabetes factos e números 2016) This project is focused on developing proposals to help people who need to self-administer insulin to use the right doses. As loss of vision is one of the consequences of the disease, patients may reach blindness levels such that they cannot see the selected dose on insulin pens. The same is true for totally blind people. Although patients may count the number of clicks, this may not be easy: when the required dose is high, so is the number of clicks to count. The work consists in studying possible solutions for increasing the safety of insulin self-administration by modifying insulin pens or creating devices to be used in conjunction with them. These devices must be Internet-connectable, to enable the administration history log to be consulted by doctors or caregivers, and should have voice feedback to let users know what dose is selected prior to administration.
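The click-counting, voice-feedback and logging behaviour described above can be sketched as a small state machine. Class and field names are hypothetical, and the one-unit-per-click assumption would have to be checked against the actual pen model.

```python
import json
import time

class DoseCounter:
    """Add-on device logic: count pen clicks, speak the running dose,
    and log each administration for remote consultation."""
    def __init__(self, units_per_click=1):
        self.units_per_click = units_per_click
        self.selected = 0
        self.history = []

    def click(self):
        self.selected += self.units_per_click
        # In the device this string would be fed to a text-to-speech engine.
        return f"{self.selected} units selected"

    def administer(self):
        entry = {"units": self.selected, "timestamp": time.time()}
        self.history.append(entry)   # later synced over the Internet
        self.selected = 0
        return json.dumps(entry)

pen = DoseCounter()
for _ in range(12):
    spoken = pen.click()
print(spoken)                        # → 12 units selected
pen.administer()
print(pen.history[0]["units"])       # → 12
```

The voice message after every click removes the need to count clicks for high doses, and the JSON log entries are what doctors or caregivers would consult remotely.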
In recent years, progressive exposure therapy has proven very effective in the treatment of psychological disorders. Following this principle, and given the danger that in vivo or in loco exposure sometimes poses to the patient, it is important to develop a Virtual or Augmented Reality framework that allows the same experiences. This proposal focuses on the manipulation of the perceived self, i.e. modifying the perception that users have of their own body. When applied to either Virtual or Augmented Reality systems, users must be able to look at themselves (looking down or through a mirror), see a different body, and feel it as their own. The idea is to exploit the “rubber hand illusion” and extend it to make users believe and feel that they own a new body, not just a different hand. The modified body perception will enable the development of serious games for the therapy of specific disorders, such as phobic disorders.
The effects of physical therapy for post-stroke patients suffering from hemiplegia are well known. In these cases the rehabilitation protocols are based on the repetition of task-specific exercises, following the principles of neural plasticity and motor learning. As in many other therapies, the principle is that the more the exercises are repeated, the better and faster the recovery. Taking advantage of high-bandwidth Internet connections, cheaper and faster computers, and the decreasing cost of head-mounted displays (HMDs), it becomes possible to have patients perform the exercises at home. This proposal consists in developing a mixed reality system for supporting the execution of therapeutic exercises and evaluating the patient's performance and evolution. Using the Internet connection, the application may enable remote analysis of the results by a specialist, who may decide to change the therapy parameters. This would mean fewer visits to the therapist, which may have a large economic impact on the patient's life, especially for those who live far from the cities.
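One simple performance metric the system could report is the number of completed repetitions, counted as upward threshold crossings in a tracked joint coordinate. This is only an illustrative sketch, not a validated clinical measure, and the threshold value is an assumption.

```python
def count_repetitions(samples, threshold=0.5):
    """Count upward crossings of `threshold` in a 1-D joint trajectory
    (e.g. normalised wrist height sampled over time)."""
    reps = 0
    above = False
    for value in samples:
        if not above and value > threshold:
            reps += 1
            above = True
        elif above and value < threshold:
            above = False
    return reps

# Three arm raises recorded as normalised height samples:
trajectory = [0.1, 0.3, 0.8, 0.9, 0.4, 0.2, 0.7, 0.9, 0.3, 0.1, 0.6, 0.8, 0.2]
print(count_repetitions(trajectory))  # → 3
```

Metrics like this, computed per session and sent over the Internet, are what would let the remote specialist follow the patient's evolution and tune the therapy parameters.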
The project goal is to develop and implement a real-time incremental 3D body reconstruction system that combines visual features and shape-based alignment using a Kinect(tm) depth sensor and video cameras. Methodologies introduced previously are expected to be implemented targeting real-time performance, by exploring the implementation of incremental reconstruction algorithms on a GPU to obtain 3D mesh models in real time. Description of work: 3D modelling consists of three main phases: scanning the object surface from different views, registering the views, and integrating them. The work will focus on the last two phases, aiming to render the reconstructed body model in a mixed reality space.
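The registration phase can be illustrated with a compact point-to-point ICP: each new depth scan is matched to the accumulated model and a rigid transform is re-estimated until the views align. A real system would use a k-d tree or GPU search instead of the brute-force nearest neighbours below; numpy is assumed.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch/SVD method); both are (N, 3) arrays of paired points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(scan, model, iterations=20):
    """Repeatedly match each scan point to its nearest model point and
    re-estimate the rigid transform; returns the aligned scan."""
    scan = scan.copy()
    for _ in range(iterations):
        d = np.linalg.norm(scan[:, None, :] - model[None, :, :], axis=2)
        matches = model[d.argmin(axis=1)]
        R, t = best_rigid_transform(scan, matches)
        scan = scan @ R.T + t
    return scan

# Demo: recover a small rigid motion of a cube's corners.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                dtype=float)
angle = 0.05
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
scan = cube @ Rz.T + np.array([0.1, -0.05, 0.02])
aligned = icp(scan, cube)
print(np.abs(aligned - cube).max() < 1e-6)  # → True
```

The integration phase would then fuse the aligned points into a volumetric or mesh representation; ICP only converges for small inter-frame motions, which is why the incremental, frame-to-frame setting suits it.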
This project aims at developing a 3D teleconferencing system that virtually puts two people in front of each other, in spite of their being in different physical spaces. For this we will explore the use of two Kinect devices to capture not only the scene image but also its depth information, from two different viewpoints. This information will be transmitted to a remote place that will show the 3D scene on a TV screen (possibly 3D-enabled) from an observer-controlled viewpoint. Since the idea is to develop a face-to-face communication system, viewpoint changes should be equivalent to moving the observer's head in front of the viewed user, as happens in a physical meeting. Consequently, the allowed viewpoint movements are confined to those that keep the observed user in his/her (virtual) half-space; this simplifies the problem, as no back views need to be generated. To enable the user to move freely in front of the display and have a 3D perception of the remote user and scene, a tracker must be used to continuously estimate the viewing point.
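The viewpoint-dependent rendering step reduces to re-projecting the received 3-D points through a virtual camera placed at the tracked head position. The sketch below assumes a pinhole camera looking along +Z with no head rotation, and hypothetical intrinsics; a real renderer would also handle colour, occlusion and hole filling.

```python
import numpy as np

def project(points, head_pos, focal=500.0, cx=320.0, cy=240.0):
    """Perspective-project (N, 3) world points into a virtual pinhole
    camera at head_pos looking along +Z; returns (N, 2) pixel coords."""
    p = points - head_pos                  # world -> camera translation
    u = focal * p[:, 0] / p[:, 2] + cx     # pinhole projection
    v = focal * p[:, 1] / p[:, 2] + cy
    return np.stack([u, v], axis=1)

cloud = np.array([[0.0, 0.0, 2.0]])        # one point of the remote scene, 2 m away
print(project(cloud, np.array([0.0, 0.0, 0.0]))[0])  # → [320. 240.]
print(project(cloud, np.array([0.1, 0.0, 0.0]))[0])  # → [295. 240.]
```

The second call shows the motion parallax the tracker enables: a 10 cm head movement shifts the point's image, which is exactly the cue that gives the observer a 3D perception of the remote scene.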