Michal's personal timeline, a place to collect and share things from Michal's life.
Created by mgaldzic on Sep 19, 2008
Last updated: 08/19/10 at 11:45 AM
The activity of developing software is unlike anything else we humans do. And since it’s a relatively new activity, we tend to use other activities as metaphors to better understand what it’s like. All of these metaphors predictably break at some point.
A DARPA-funded startup hopes that a new kind of processor it has developed, based on analog probabilities instead of the digital sureties of binary, will revolutionize the world of computing.
Traditionally, there have been three ways to reimburse physicians for services rendered: salary, capitation, or fee for service. A physician who receives a salary is paid a certain amount per month or year of work. Physicians reimbursed through capitation are paid based on the number of patients they see or the number of patients for whom they are responsible. Physicians reimbursed on a fee-for-service basis are paid for every service they provide, regardless of how simple or complex.
The way physicians are paid affects, even if subconsciously, what physicians do. If salaried physicians believe that spending hours on a well-child or routine adult health maintenance visit is clinically desirable and there are no administrative controls, physicians are likely to increase time spent with patients, limit the number of patients they see, and not be concerned about throughput. Physicians paid by capitation are motivated to include as . . .
There was a time when only scientists used computers. Now systems that are thousands of times more powerful are available to nearly everyone.
An analysis of over 50,000 Science papers suggests that it could pay to include more references.
Biobutanol Producer Gevo Registers for $150M IPO (Greentech Media blog): “Our strategy is to commercialize biobased alternatives to petroleum-based products using a combination of synthetic biology and chemical technology.”
& Deeper by Jorge
"The Repulsor Field Explained" - originally published
A growing population of digital organisms, started from one individual, colonizes an initially empty space (black). As it does so, the population evolves by random mutation and selection, based on each organism's resulting phenotype. Every colored square represents one digital organism, with different colors reflecting different fitness levels. The speed, control, and ease of data collection in the Avida digital evolution platform permits experiments that would be difficult or even impossible with natural organisms, such as the present study (Clune et al., doi:10.1371/journal.pcbi.1000187) on the evolution of mutation rates. (Kaben Nanlohy, Michigan State University)
Michigan State University (MSU) researchers have developed “digital organisms” called Avidians that were made to evolve memory, and could eventually be used to generate intelligent artificial life and evolve into symmetrical, organized artificial brains that share structural properties with real brains.
MSU researcher Jeff Clune works with a system called HyperNEAT, which uses principles of developmental biology to grow a large number of digital neurons from a small number of instructions. He translated the artificial neurons into code that could control a Roomba robot.
You can build complex brains from a relatively small number of computerized instructions, or “genes,” he says. Their brains have millions of connections, yet still perform a task well, and that number could be pushed higher yet. “This is a sea change for the field. Being able to evolve functional brains at this scale allows us to begin pushing the capabilities of artificial neural networks up, and opens up a path to evolving artificial brains that rival their natural counterparts.”
A team of students is using bioinformatics to implement federal guidance on synthetic genomics. The students' work will help gene synthesis companies and their customers better detect the possible use of manufactured DNA as harmful agents for bioterrorism.
Shared by mgaldzic
worth a read
In a SPIEGEL interview, genetic scientist Craig Venter discusses the 10 years he spent sequencing the human genome, why we have learned so little from it a decade on and the potential for mass production of artificial life forms that could be used to produce fuels and other resources.
Photographer Sergey Larenkov uses computational rephotography (as shown above and explained here by Wired) to overlay extant WWII-era photographs on their corresponding modern settings. The results are both spooky and stunning:
The shots really do have to be seen large, so check out Larenkov's LJ page for the rest of 'em.
Measurement of protein and messenger RNA copy numbers in single Escherichia coli cells gives a system-wide view of stochastic gene expression. Authors: Yuichi Taniguchi, Paul J. Choi, Gene-Wei Li, Huiyi Chen, Mohan Babu, Jeremy Hearn, Andrew Emili, X. Sunney Xie
If their owner isn't watching, dogs go into stealth mode to steal food. It is more evidence that they can tell what others are thinking
DNA factory builds up steam
First reliable components for synthetic biology could be available by the end of the year.
The online encyclopedia is exploring ways to embrace the semantic Web.
The DOE funds a research center aimed at making artificial photosynthesis practical.
This recipe really worked well for the ten of us; we also had chicken drumsticks, mashed potatoes, and skewers with veggies (cherry tomatoes, yellow squash, mushrooms, onions, orange peppers, and parboiled baby red potatoes).
The ribs were the bomb. Here’s how we made them. Approx. 2 racks of ribs (1 big pack from Costco)
The Night Before
Prepare Ribs in Dry Rub Marinade
Mix these spices in bowl
1.5 cups white sugar
1/4 cups salt
2.5 tbs black pepper
3 tbs paprika
1 tsp chili powder
(optional) 2 tbs garlic powder
Rub into Ribs
Place ribs in roasting pan and cover
Refrigerate >8 hours
The Next Day
Pour off at least half the liquid that formed overnight
Set oven to 275 °F (135 °C)
Bake uncovered for 3-4 hours, turn once if so inclined
An hour before ribs are ready
Make BBQ sauce by cooking the following in skillet
5 tbs drippings from cooking ribs
1/2 a chopped onion
Cook until browned and tender
Add the following:
4 cups ketchup
3 cups water (I don’t remember actually adding this...)
4 tbs brown sugar
1 tsp chili powder (or “to taste”)
Reduce heat, simmer 1 hour COVERED! (splatter)
Achieve “good thickness” == “goodness”
(optional) Finish here, baste with sauce
Baste Ribs in sauce
Put the vegetable skewers on 1st
(optional) add soaked wood chips to smoker box of gas grill (need to get one of thems)
Grill for 20 min, Baste and turn occasionally
Don’t burn them
Last night the Seattle DIY biologists met for a second time; this time there were seventeen of us, including myself, a first-timer. The group trickled into Dan Heidel’s house in Phinney at around 8pm. Dan began by telling us about his project to set up lab space in a commercial space he rented in south Seattle, which he is opening up for any serious projects that any of us would like to undertake. He has been buying up equipment and it is really starting to come together, but he feels that if someone did start a project now, chances are at least one piece of necessary equipment would be missing. Not to mention consumables, of which he has none. Needless to say, his effort, called Seattle Open Bio Labs, LLC, is an amazing step forward in organizing, and sharing it with the group is a very generous gesture. Dan has been buying equipment to create an environment where he hopes to build a community of Open Science projects. He hopes to encourage all to keep their lab notebooks online and open, as transparency and open collaboration are a powerful driving force behind innovation. Of course, if someone has a commercial goal in mind, Dan is willing to listen about specific cases. Where to find funding for running this operation and supporting projects remains an unanswered question; for the time being, individuals would have to fund themselves. The discussion of Dan’s lab answers the main question from Sandra Porter’s post (see 1st meeting) about where to practice DIYBio. (Sandra was unable to make this meeting.) Still, there remain some to-dos before starting, such as contacting the fire department for advice on any permits and addressing concerns of the wary (Dan does have liability insurance for the lab space).
After a significant amount of time we moved on. Dan does talk a lot, but not in a bad way. We discussed projects from various other members of the group.
Lifesuit to help paralyzed people walk (see http://theyshallwalk.org/ for more)
Growing neurons to generate solenoids, using growth factors and maybe brewing GFP beer!
Produce our own enzymes to share the stock with the international DIY Bio community
in vivo DNA synthesis, using light as input to dictate to the cell the sequence of DNA. This would allow you to change the code of the cell while it was running.
Setting up DIY Computational Biology (fewest hurdles on this one) BeBoBio
Making fuels from microorganisms
Geothermal cycling for speeding up growth
Self stable ecological systems.
Bacteria that can metabolize polyethylene; Dan said Pseudomonas originalis, which smells like grapes and plagues burn victims, may be capable
Bio Weather Maps
Randy Hall, Kris Ganjam, Monty Reed, Dan Heidel, James Yang, Alec Nielsen, Tyler Casey, Matt Crowley, Michal Galdzicki, Scott Mason, Tracy Tucker, Max Berry, Ron Shevuah (4 names missing)
[I am missing some people and projects; please contact me if you were there but I didn’t jot down your name, sorry. Some people’s names I got from the RSVPs, so if you weren’t there and I listed you above, I apologize; ask me to correct it.]
A delicious chicken chili for the slow cooker, made with chicken, beans, and other ingredients.
1 lb chicken, cut up into small chunks (I like to use boneless breasts for their “ease” and lower fat content)
1 cup chopped onion
1 can (or the equivalent) chicken broth
2 cloves of garlic, chopped finely
2 tsp Cumin seed (ground will not withstand long cooking as well)
1/2 tsp dried oregano leaves
3 -15oz cans white beans (great northern or cannellini), drained and rinsed
1 or 2 chopped red, green or yellow bell peppers, or combination
jalapeno chili peppers, fresh, jarred or canned, optional or ‘to taste’ (depending on how much heat you like!)
In a 4 or 6 quart crockery cooker combine the chicken, onions, chicken broth, garlic, cumin and oregano.
Let cook awhile on low (approx. 3-5 hours, depending on your schedule). Add the drained beans.
Now here is the important part if you don’t want mushy chili: add the bell peppers and jalapeno peppers (if using) no earlier than the last hour or hour and a half before serving.
Top each serving with shredded Monterey Jack cheese and/or broken tortilla chips if desired. Chicken Chili shared by boathoff from http://southernfood.about.com/od/crockpotchicken/r/bl118c19.htm
Using ontologies such as the MGED Ontology (MO) and the NCI Thesaurus (NCIt) to annotate microarray data from GEO. They mapped MO and NCIt. The BCM-CO prototype contained 1,200 terms and 5,500 synonyms; these were used to find GEO descriptions of breast cancer single-channel arrays. They discussed whether the terms found in the descriptions were present in the NCIt; compositional terms in particular were hard to find. The indexed data were used to retrieve potential analysis sets.
The NDAR repository holds data from NIH-funded studies of autism, with features such as age, verbal and non-verbal IQ, and ADOS and ADI-R scores. They surveyed the autism literature and extracted terms and relationships to build an ontology of autism phenotypes, defining each phenotype in PATO terms and extending BIRNLex information. SWRL rules were used to define autism phenotypes in terms of data for data analysis; this coding concludes a subject’s phenotype by assessing whether the subject’s data contained the relevant code. They then query the data set with automated inference of phenotype abstractions. They conclude that you can do data inference, not just annotation; they state that the information model is part of the ontology, and claim generalizability to broader clinical data.
Detwiler: Regular Paths in SPARQL: Querying the NCI Thesaurus. Native OWL representations are obscured: the NCIt browser simplifies the view of the ontology graph, but the underlying OWL representation is far more complicated, since OWL definitions link properties through an intermediate “restriction”. The Gleen extensions allow regular path expressions to be defined, implemented as a plugin.
Sharp (presented by Olivier Bodenreider): A Framework for Characterizing Drug Information Sources
Drug information sources are varied and none are comprehensive. Drug information spans domains such as pharmacy, chemistry, biology, and clinical medicine (pricing/packaging excluded here). These four domains represent the groupings by which information is organized for evaluation (to check whether a resource covers that area); the data were analyzed with PCA, and the domains cluster within the first two principal components. DailyMed, WHO-ATC, UMLS, and DrugBank were found to be the best.
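The PCA step described above can be sketched in a few lines of numpy. This is a minimal, hypothetical illustration, not the paper’s analysis: the coverage matrix and the “SourceX” resource are invented for demonstration.

```python
import numpy as np

# Hypothetical coverage matrix: rows are drug information resources,
# columns are the four evaluation domains (pharmacy, chemistry,
# biology, clinical medicine); 1 = resource covers that domain.
resources = ["DailyMed", "WHO-ATC", "UMLS", "DrugBank", "SourceX"]
coverage = np.array([
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
], dtype=float)

# Center the data, then use SVD to obtain the principal components.
centered = coverage - coverage.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ Vt[:2].T  # projection of each resource onto PC1/PC2

# Fraction of total variance captured by the first two components.
explained = (S[:2] ** 2).sum() / (S ** 2).sum()
print(f"variance explained by PC1+PC2: {explained:.2f}")
```

Plotting `scores` would give the kind of two-component picture the talk described, with resources of similar domain coverage clustering together.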
Cook: Bridging Biological Ontologies and Biosimulation: The Ontology of Physics for Biology
Biosimulation semantics. Biosimulation models are hand-crafted; the code is formal, but the meaning of those models is not. The SemSim ontology maps the simulation code to reference ontologies: a semantic map between the computational model and the physical model. Physical properties and physical dependencies are the key to the OPB structure, defined by four types of properties: flow, displacement, force, and momentum. Dependencies are the relationships between these kinds of properties. The OPB can serve to map the physical relationships within the mathematical representations of biological processes.
Piccolo: Somatic Mutation Signatures of Cancer (3rd prize for student paper)
Classification of cancer motivates this work: cancer types as in location and histology. The aim is to differentiate between cancer types, applying the Vogelstein model as a guide. The Catalogue of Somatic Mutations in Cancer (COSMIC) was used as the source of studies, which typically analyze a single gene or a few genes at a time. They picked the mutations which have a proportionately high contribution to cancer, performed a type of machine learning (?), and then clustered the representative vectors using Manhattan distance and hierarchical clustering. Colorectal adenoma and carcinoma are similar according to their somatic mutation molecular profiles; breast cancer, however, is distant.
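The Manhattan-distance step underlying that clustering can be sketched with toy data. The mutation-frequency vectors below are invented placeholders, not COSMIC numbers; the snippet just shows the first merge decision a hierarchical clustering would make.

```python
import numpy as np

# Toy mutation-profile vectors (hypothetical): each row is a cancer
# type, each entry the mutation frequency of one gene.
profiles = {
    "colorectal_adenoma":   np.array([0.6, 0.4, 0.1, 0.0]),
    "colorectal_carcinoma": np.array([0.5, 0.5, 0.2, 0.0]),
    "breast":               np.array([0.0, 0.1, 0.1, 0.9]),
}

def manhattan(a, b):
    """Manhattan (L1, city-block) distance between two profiles."""
    return float(np.abs(a - b).sum())

def closest_pair(items):
    """Return (distance, label1, label2) for the closest pair of profiles."""
    labels = list(items)
    best = None
    for i, x in enumerate(labels):
        for y in labels[i + 1:]:
            d = manhattan(items[x], items[y])
            if best is None or d < best[0]:
                best = (d, x, y)
    return best

d, x, y = closest_pair(profiles)
print(f"closest pair: {x} / {y} (distance {d:.2f})")
```

With these made-up vectors the two colorectal profiles merge first, mirroring the adenoma/carcinoma similarity the talk reported; agglomerative clustering simply repeats this closest-pair merge until one cluster remains.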
Reported on the use of BioPortal 2.0 as a community-based access point to ontologies that actually links to the data they are meant to describe. The community tools include browsing (including visualization, from P. Storey, Victoria, B.C.), comments, linking (resources), and voting (to reach consensus); the inclusion criteria are loose (related to biomedical research). It also provides access to mappings between ontologies. The projects and notes pages provide a place for testimonials of use, challenges, questions, and criticism. The ontology metadata provides descriptions of the ontology itself. API and documentation are also available. Mappings are manual at the moment, but Musen suggested that prompt mapping uploads should be possible.
Shankar (and team) -
Talk + demo of TrialWiz, from the ITN at Stanford. Multiple applications integrated using a knowledge management (OWL, OWL/SWRL) and data management (DB) framework. SWRL used for mapping. Epoch ontologies.
BIRN no show?
Dan Masys covered both Clinical and Bioinformatics
CDSS for Providers
Warfarin and aspirin studies showed improved adherence to guidelines
CDSS for Patients
Benzodiazepine: tailored intervention gets patients to stop meds (easier to stop than to start patients on meds, hahah)
Hazardous drinking in college: online information intervention had impact to lower drinking, was sustained >12mo
Automated test result reporting: knowledge is power (increased overall satisfaction)
Quality of life and needs for care: structured patient-provider dialogue around the patient’s world view helped relate patients’ values.
CDSS - No diff reported
Matheny - med lab monitor
Hicks - BP management
Thomas - registry - audit
Harari - health risk appraisal
Hansagi - ER info -> primary care provider
Grant - PHR links EMR for type-2 diabetes; previsit use of PHRs had impact on med change (highly motivated cohort)
10 new RCTs; 3 for hypertension
Green, BB; Santamore, WP; Madsen, LB - all on BP (secure website w/+w/o pharmacy ) -> telemedicine equivalent to in person monitoring for chronic conditions; adds to already known findings
Meyer, BC in Lancet Neurol. - 2-way audio/video vs. telephone for thrombolytics; more accurate stroke decision making; a useful way to do things for specialists.
Shea S - diabetes care by telemedicine for low income populations; BP, LDL, HBa1c all got better; telemedicine effective but no cost analysis (comments included cost benefit to family not just provider)
Ellison - post-operative robotic rounds; morbidity and length of stay were equivalent; patient satisfaction was also equivalent; implies: doctor bedside = robot bedside
Telemedicine No diff
Dansky - heart failure
Leimig R - transplants
Practice of Informatics
Love - mining EMRs for CTSAs
Bareznicki - data mine community pharmacy med records to improve asthma management
Meystre SM - automated problem list w/ improved sensitivity; what is the utility of the problem list? This was an intervention to change clinical practice, but the pilot has challenges scaling up.
Bioinformatics/ Computational Biology
Cooper (from UW) - Warfarin dosing; aim to add more common SNPs to the current two genes (VKORC1, CYP2C9) via GWAS. Which genotypes benefit from which therapy dosage? Pathway interpretation of SNPs: an early SysBio approach to functional understanding.
Castellanos-Rubio - Celiac disease; linkage with SERPINE2 2q33, PBY3/PPP6C 9q34; Complementary use of structural and functional information to find them.
Chaussabel - Systemic lupus: microarray approach which includes functional information to boost relevance. System-scale approach combining quantitative data.
10. Personal Genome Project
9. ONL Strategic Plan (Health IT)
8. Mass and NV pass laws requiring encryption of personal data devices
7. 1st HITSP standards exchange (NHIN)
6. AMIA Rockefeller Foundation Global eHealth Connection Conference
5. CMS Medicare Improvements Act - ePrescribing pays more.
4. Explosion of Molecular Data - 2nd, 3rd, 4th Personal Genomes; proteomics 1TB/ experiment; infrastructure strained; we’re behind the power curve
3. FDA Sentinel Initiative
2. NIH Open Access Policy
Mougin: Used a mapping-based approach to find semantic errors in the NCI Thesaurus. One of the conclusions was that Pellet is slow; however, that was disputed by the audience response.
Denny: Created a new terminology for clinical notes by building it from a training set of notes. Parsed things which look like headers from the structured note.
Fung: RxTerm: a chopped-up version of RxNorm, apparently intended for data entry of medications. The original RxNorm is really exhaustive, with dosage and route information all pre-coordinated; RxTerm reduces the number of displayed choices as you type the first 4 or 5 characters.
Lunch meeting at Clark & Parsia. Kendall Clark and Michael Grove were very nice to have invited me to lunch and to chat about the Semantic Web work that they are doing. I got a tour of the office and discussed my project. Kendall suggested looking at some of Mike Smith’s modularity work. Meeting Evren Sirin, Michael Smith, and Markus Stocker was impressive as well.
Turner: Ethnographic study of PHIMS system work-flow performed by nurses in the Kitsap County public health department. The study revealed that the task-flow of using the system was inflexible; it was performed at the end of the day/case, and the case work was discontinuous.
Altman and Butte:
Tag-teamed the topic, describing the translational informatics field from their top-ten lists of what should be done to some examples of their own studies. Described the drug-target mechanism of action using the Warfarin PharmGKB project example and the diabetes microarray candidate-gene example. The takeaway point: interface design does not drive adoption of these tools; findings do.
Noy: Collaborative Protege. presented by Musen:
The NCI Thesaurus is a collaborative project: ~20 curators (editors), with 1 editor in charge of commits/tasks/conflicts.
Collaborative Plugin: Protege 3.x
both Rich and Web clients
editing, annotation, discussion, chat tab, comments
An implementation of the “Change and Annotation” ontology
Evaluated pilot: NCI Thesaurus, 4 editors -> 40 changes
contrast to Semantic Wikis: BiomedGT
(used the X,O,check comparison with semantic wikis)
i.e., WebProtege is more at the knowledge-representation level, less about access to the broader community
1 question on: Bridge Model? What is it?
Shaw: Generating Application Ontologies
Big ontologies -> view -> small subdomain ontology
Subqueries: query over query (like nested SQL queries)
Recursive queries: gathering subgraphs: extract a portion of the ontology by setting a base case and growing until no more growth can happen.
Skolem functions: combining data from two ontologies creates a new entity from the combination (AorticBlood + Pressure) -> AorticBloodPressure
Radlex example specific to radiology: visible liver parts
Question: Can rules be used too? Queries are more efficient for big ontologies, and they are complementary expressions. Combining two entities could combine the subtrees, creating a nonsensical (Cartesian) product; it is up to the user to check. (Is it really? What about inheritance?)
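The “base case and grow until no more growth” idea from Shaw’s recursive queries is a fixed-point computation, and it can be sketched independently of SPARQL. The toy ontology edges below (loosely echoing the RadLex liver example) are invented for illustration.

```python
# Toy ontology: each term maps to the terms it links to (e.g. has-part).
# These edges are invented, not from RadLex or the FMA.
ontology_edges = {
    "Liver": ["LiverLobe", "HepaticArtery"],
    "LiverLobe": ["LiverSegment"],
    "LiverSegment": [],
    "HepaticArtery": [],
    "Heart": ["Ventricle"],
    "Ventricle": [],
}

def extract_subgraph(edges, seeds):
    """Grow a subontology from seed terms until a fixed point is reached."""
    subgraph = set(seeds)              # base case
    while True:
        grown = set(subgraph)
        for term in subgraph:
            grown.update(edges.get(term, []))
        if grown == subgraph:          # no more growth: done
            return subgraph
        subgraph = grown               # recursive step

print(sorted(extract_subgraph(ontology_edges, {"Liver"})))
```

Starting from the single seed "Liver", the loop pulls in the reachable liver parts and stops, leaving unrelated branches (here, "Heart") out of the extracted application ontology.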
Lee: Comparison of Ontology-based Semantic-Similarity Measures
Frequency v Ontological approaches both against expert
distance matrix from SNOMED-CT disease (internal medicine experts)
Descendant and frequency
Evaluated with experts
metrics do not agree, appropriate distance for use case: no superior method
Comments from the audience: some questions as to whether the two ontologies are comparable, as they have different purposes.
Mejino: FMA-RadLex: An Application Ontology of Radiological Anatomy derived from the Foundational Model of Anatomy Reference Ontology
RadLex Radiological Lexicon - broad technology, imaging, anatomy, etc
Focus on Anatomy
Terminologies v Ontologies
in ontologies entities share properties, in terminologies terms and definitions
De novo creation - prune big reference ontologies
transform terminology into an ontology
Lots of single inheritance attacked!!
Making Pancakes from Scratch
Sarah and I made pancakes. One of our favorites.
2 cups flour, 1 3/4 cups milk, 2 eggs, 2 tbs baking powder, 3 tbs maple syrup, 1 tsp vanilla, 1/4 cup oil, 1 tsp salt, 2 cups blueberries. Mix well using a whisk. Get a frying pan med/hot (no need for oil, it’s in the mix). Use a ladle to pour; flip when the bubbles on the edges start to dry up (when the bottom is golden). Use the oven to keep the ones which are done warm till serving. Serve with butter and maple syrup.
Getting back to exploring the web was part time-waster, part education. Since early spring I have been tinkering with several of the components that are a big part of Web 2.0 for me. The creation of this blog represents the first of such adventures. My original decision to have the blog protected by Secure Socket Layer (SSL) authentication, through the University, was based on my apprehension about the consequences of writing openly on the web. I now believe many people first have this reaction: “What if... [insert negative scenario]?” and “Why would anyone care what I write?” I also had the questions, “Am I behind the times?” and “Is it a waste of time?” All but the last question need no answer. Not really. Instead, the adventure unfolded. Besides blogging using WordPress, I use Google Reader, Twitter, and Ubiquity (a Firefox plugin). I also joined two social networks, LinkedIn and Facebook. These are my first tools for engaging with the new web, and these are their stories.
Writing in the blog takes dedication. To create content I have to be very involved in the subject at the time of the writing; otherwise it would be real work. The blog is supposed to act like a journal, but other people can interact with it by reading and commenting. Right now only University of Washington affiliated people can see it or comment on it. In reality, to do so they have to log in with their UW NetID and then log in to the blog as well; to allow both logins at the same time, I had to add their names to the .htaccess files. This is the old web experience. On Web 1.0, typically each website was created and operated by an administrator; he or she maintained the code, the users, and the access. The maintenance of the site included the server, the application, and any other layered services like MySQL, or sometimes even Unix environments. Wow! Using the web can be a lot easier now; there are options, very importantly for free, offering various gradients of control. Now anyone can create the content for a site, usually open to everyone; the security, access, and maintenance are an afterthought. There are still many administrators, but what matters is the content. Most important is frequently updated content.
Enter Google Reader, the method I chose to consume the ever-updated Really Simple Syndication (RSS 2.0) feeds used to disseminate content. Reader, as it is called for short, allows the aggregation of the feeds into a single interface. Compiling the information from this widely adopted standard allows for the quick review of the summary or headline of each article and a method of tagging it with a star for reading later. The other important option is the ability to share handpicked articles with a list of individuals. The ability to socialize through sharing information with others is a hallmark of Web 2.0. Internet users exchange information using a combination of push and pull technologies: the information can be sent from one website to another and can be consumed by the recipient in various ways. RSS feeds have been deployed in the world of publications, especially news, and in the aggregate collection of blogs, termed the blogosphere. The things I read about will eventually turn into the things I will write. Directly or indirectly, Google Reader will provide significant content for my writing. A lot of time will be wasted, but I will know when Applesoft has a new iProduct. However personally entertaining reading and writing are, the largest and most “it” Web 2.0 phenomenon is the social network.
At first, I shyly joined only LinkedIn, a site aimed at sharing professional information, as in the resume or CV, with acquaintances. The purpose seems to be mostly a mindless collection of all business contacts, with the eventual goal of leveraging the social network for employment. I treat the result as a collection of business cards which will update themselves if the users participate. I am pleasantly surprised at how many senior and distinguished professors have participated. There appears to be a rapidly shrinking generation gap, which in this context translates to the possibility of finding an informatics job online after (if) I get a PhD. The social network phenomenon has very little to do with jobs in reality. In the span of the last couple weeks I joined Facebook. The decision came after several people asked me whether I was on Facebook in regard to keeping in touch. It serves as a way to update friends and family on the minutiae of daily life and who you are dating, plus a large number of widgets used for both self-entertainment and virtual interactions with others. The surprise is that enough people care about each other’s business to make this website a hit. Facebook is so popular that many workplaces are banning its use at the office. There are benefits to using such a tool, especially for maintaining an ambient awareness of your social network while your geographic location changes and the people you want to stay close with are in different area codes, “in different area codes”, hmmhm. The interface is an asynchronous form of communication allowing many-to-many and one-to-one messaging. The most aggressive part of the interface is the automatic news feed that informs the user of the goings-on of their friend list. The feed compiles any changes on the pages of your friends and displays the activity for others to view. The status update serves as a micro-blog which is displayed for your friends to see what you are saying.
To feel like I am really diving in headfirst, I started a Twitter micro-blog, feeding my Facebook updates from the very beginning. Twitter accomplishes the same exact task of updating the answer to the question, “What are you doing right now?” The innovation is that the feed can be updated through SMS text, which made it possible for many people to post and access the feeds using common mobile devices. The community is not necessarily based on friendship, just interest in reading the other person’s micro-blog, and is mostly unrestricted. The advantage of multiple and mobile methods of updating, and the simplicity, seem to contribute to a fast rate of updates from dedicated users. Such a micro-blog can be difficult to understand at first, but after some time familiarity builds. Conventions are used to aid communication of thoughts within the 140-character limit: the ‘@’ special character is used to reply to users on the system, and the ‘#’ is used to tag messages into categories (defunct since July 10, 2008). To actually accomplish the updating at any useful interval, I explored methods of access.
Where is my crackberry? Finding and running applications to access Facebook and Twitter was as easy as 1-2-3; within minutes I was able to access both services. Now I was able to update my feeds and upload pictures while on a trip. On the laptop, I downloaded Spaz, a Twitter client, and used it for two posts before re-discovering Ubiquity. Using separate applications and websites for everything we do on the web is actually a waste of time. Ubiquity, a plugin for Firefox, allows me to access my favorite services on the web, like Google Maps, Gmail, and Twitter, from a command-line interface within the browser. This new addition has so much to offer. In some ways analogous to the Unix command line, it has the potential to be extremely powerful. The Ubiquity interface is simple, and it allows the creation of new commands to extend functionality. The ability to execute web commands using a text interface seems odd at first, and the initial cost in time to learn the available commands will be high compared to the select-and-click of the mouse. Those who take the time to learn and innovate will reap rewards in the long run. Wow, web services could actually be useful.
News headline that caught my attention:
Protein engineering: The fate of fingers
“Proteins with ‘zinc fingers’ designed to bind almost any DNA sequence will soon be available to any lab that wants them”
The original article describing the methods is:
Rapid “Open-Source” Engineering of Customized Zinc-Finger
Nucleases for Highly Efficient Gene Modification
http://tinyurl.com/4v9wun (from University of Washington) or http://tinyurl.com/4acoql (general site)
There are a lot of articles and posts about Google and Microsoft creating computing clouds. The Amazon and Google application engines are supposed to serve as a launching pad into this new atmosphere. The Semantic Web was already imagined to exist as an amorphous network of services, all connected using agents of OWL and having an open world. While it is fascinating to think of the Semantic Web as the phantasm of geeks, more of the reality pieces are coming together. From the bottom (or from the top, if it’s all fluffy vapor and dust): the IBM-Google partnership will provide UMD, UW, CMU, MIT, etc. with cloud access, in my eyes the hardware necessary to run the chaotic code creations of graduate students. The challenges will begin with this “code”. I am now thinking that each company will create a set of APIs which can utilize this resource, but what will it look like to build the entire stack? Is development to happen on my little laptop, and then I let the code go into the cloud like a balloon? Do I have to learn another bizarre query language or shell script? Will the PIs around me ask, and what about the IP? So my piece of the SemWeb could be a CC-GNU GPL, Python or Java, AJAX, OWL-API, GWT, Google App Engine, Google Gears, Google Code Project, OWL, SBML, BioModelsDB, SemSim, BioSimSemRep (BSSR), Novel Contribution to Knowledge, Dissertation stack in the cloud.
Today the news came from Mars: they got water. In some crazy way this is the most significant finding since the landing on the Moon. While this finding was suspected for some time, it was only now confirmed. There is an incredibly important point in time in the scientific process: the moment when the results arrive and the answer is known immediately. It's either bad, as it is most of the time, or it is good. When the results are "positive", it's amazing.
In a single leap of thought, this means everything is possible, again. Now that there is water on Mars, we can create bacteria that can survive the rest of the harsh conditions. The bacteria should produce a mixture of gases which would in turn create an atmosphere. This process, called terraforming, could create a new world. This is not only a fantasy; it is now one step closer. What other star do we reach for?
The important part is that this shows everyone how amazing goals can be realized. It provides inspiration for me and others in the pursuit of science, and something to dream about for us all.
Nature May 1 2008
Well and they thought they had them all, ha!
Darwinian Evolution on a Chip
Brian M. Paegel, Gerald F. Joyce
The Scripps Research Institute and The Skaggs Institute
Computer control of Darwinian evolution has been demonstrated by propagating a population of RNA enzymes in a microfluidic device. The RNA population was challenged to catalyze the ligation of an oligonucleotide substrate under conditions of progressively lower substrate concentrations. A microchip-based serial dilution circuit automated an exponential growth phase followed by a 10-fold dilution, which was repeated for 500 log-growth iterations. Evolution was observed in real time as the population adapted and achieved progressively faster growth rates over time. The final evolved enzyme contained a set of 11 mutations that conferred a 90-fold improvement in substrate utilization, coinciding with the applied selective pressure. This system reduces evolution to a microfluidic algorithm, allowing the experimenter to observe and manipulate adaptation.
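The protocol in the abstract, exponential growth followed by 10-fold dilution, repeated, is simple enough to caricature numerically. The sketch below is a toy model with invented parameters (mutation rate, fitness step, population size), not the paper's chemistry; it just shows why the dilution loop acts as a selection algorithm: lineages with higher growth rates contribute more offspring to each diluted sample, so the population's mean rate can only ratchet upward.

```python
# Toy caricature of serial-dilution evolution: grow, dilute 10-fold,
# repeat. Parameters (mutation rate mu, fitness step, population
# size) are illustrative, not taken from the paper.
import random

random.seed(0)  # fixed seed for reproducibility

def serial_dilution(rounds=50, pop=200, step=0.1, mu=0.01):
    rates = [1.0] * pop                       # growth rates, all equal at start
    for _ in range(rounds):
        offspring = []
        for r in rates:
            # Rare "mutation" bumps a lineage's growth rate.
            child = r + step if random.random() < mu else r
            # Faster replicators leave proportionally more offspring.
            offspring.extend([child] * int(child * 10))
        # 10-fold dilution: carry a random tenth (capped at pop) forward,
        # so faster lineages are more likely to survive the sampling.
        rates = random.sample(offspring, min(pop, len(offspring) // 10))
    return sum(rates) / len(rates)            # mean growth rate at the end

mean_rate = serial_dilution()
# Rates only ever increase in this toy, so the mean never drops below 1.0.
```

The real experiment's 500 iterations and 90-fold improvement came from actual RNA chemistry under falling substrate concentrations; the loop structure, though, is exactly this kind of grow-dilute-sample algorithm.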
Herbert Sauro: introduced the workshop on the morning of April 26th with emphasis on the social and continuous coffee aspects … Then he passed it on to
Raik: opened the BioBrick standardization session. Keyword summary: modularity, hierarchy of scale. My notes say: "Right now we are in between control over…" Endy 2005 [standardization, decoupling, abstraction, open exchange]. To get it done, the process is: two parties at different locations demonstrate use! Shaolin! Press play to continue. It takes two parties to request an RFC# and the BBF enacts the standard.
Silverlab, Freiburg, BioBrick++, BioBrick Extreme (Berkeley)
Big league Legal Issues : patents, licenses.
Kim: Jason Kelly’s results on FACS measurement standard
John: measurements standards using dual luciferase assay
Chris: Standard sets
Lunch: How to get users to annotate parts?
Mac and Jason: Building Registries that don't suck
Raik: (again): Bio Brickit in Django
Jean Peccoud: 225 sequences matched back to ~1000 attempted seqs and total 4,856 entries
Andrew Miller: CellML
Vincent Rouilly: Petri Nets (he has F2620 the only fully characterized BioBrick)
Michael Pedersen: LBS
Michael Hucka: Best Practices and SBML 3
Michael Blinov: BioNetGen
Lucian Smith: Antimony
Sarah Richardson: BioStudio and yeast 2.0
Guillermo: Chassis model and evo
Jonathan Goler: BioJade (full Synthetic biology CAD tool) it works
Jim: Editor of Synth Bio iet.org/synbio
Make golden bricks: Collins/Gardner, Elowitz, Ron Weiss
In early April of this year we read Hoyer (2005) for the ethics seminar. While it brings up issues that are tangentially, and sometimes completely, relevant to the field of Biomedical and Health Informatics, I recognize now that the viewpoint includes the field of genetics as part of its bias. It is hard for me to shed my own ties with the field of genetics, but for the purpose of theoretical exercise I can try. Hoyer's focus comes from the nature of the data being human genetic information, and particularly samples in biobanks. However, informatics, as a field, draws on the experience and lessons of its domain fields. Therefore, this article applies as a slice of possible consideration within informatics. The most important point Hoyer makes is that implementation of a policy should be valued more highly than its intention and its "word". However, the pragmatic point is more effective: an ethics policy serves as a shield against aggressive media attacks which arouse questions of impropriety and flame fear. This pragmatic goal is motivation enough for the informatician to develop an ethics policy which not only guides, but also directs, the enterprise. The success of predecessors who have anticipated these issues leads me to believe that failure to do so can lead to, at best, stalled projects and, at worst, public crucifixion. An active ethics policy, written and practiced, should serve the community affected by the research or practice, and at the same time deflate the skeptic. Then, of course, there may be a better way of saying that.
I am exploring the use of blogging as an outlet and a way to organize thoughts on research topics of interest. Part of the incentive came from the Clinical Informatics course in winter 2008 with Dr. David Masuda (who also suggested the WordPress application). The vigorous debate of the ethics unit led to the creation of, and my joining, the Ethics seminar with David and Dr. Brian Brown in spring 2008. I am hoping to use this blog to express my thoughts on the subjects brought up in that forum, though I have a feeling I will explore many other areas of interest. The first step should be to establish an audience, to give the writing direction and form. The other issue will be whether this will be a place for me to just take notes and develop them ad hoc, or to tackle specific topics systematically, etc.
The blog is titled theBHIway today, but that may change. There is no specific mission as of today, but some are forming. Most of my WordPress time right now is being spent on installation issues. I have made the blog accessible only to the authenticated University of Washington community, as I am not sure whether it is ready to be released to the greater public. WordPress version 2.5 came out within the last couple of days, so I have to see whether the upgrade is a good idea. Most likely it is, as this is the only post. I do have concerns as to whether the HTTP authentication plugin will work with the new version. This reminds me of my gallery of photographs, which needs work too. I guess I should just take the time and do the whole bit. Let's see what a post looks like on the front page.