Category Archives: Feldman Project
8:00 – 11:00 SR
- Created a timer callback, modified the dataProvider, and re-rendered the chart. Seems to be working fine, with no flicker, even at a 100 msec update rate.
11:00 – 1:00 FP
- Meeting with Andrea. Looks like uniqueness can be determined by word choice in a corpus as small as 500. That does make things easier. It also allows for triangulation against other metrics, which would allow for looking at accelerometer data from several body positions for example. Though I’m not sure that’s needed.
- An issue to consider is that people who might use the site would have other examples of their writing, which means an anonymous source could be identified. As a way around this, a vector-based translation algorithm could be trained using the identification code to remove or modify the identifiable parts of the user’s language.
- And actually, this means that I could build a simple website that you train once, then write to. The site then transpiles the user’s words and publishes to Twitter or Facebook, for example. This addresses only the anonymity part of the problem, not the trustworthiness part, but it’s nice low-hanging fruit.
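As a rough illustration of the word-choice idea above, here is a minimal stylometry sketch in Python (the function names and sample texts are all hypothetical): it builds normalized word-frequency vectors and attributes an unknown sample to the known author with the most similar vector. Real identification code would use much richer features, but the basic shape is the same.

```python
from collections import Counter
from math import sqrt

def word_vector(text):
    """Normalized word-frequency vector for a text sample."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def cosine_similarity(v1, v2):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(v1[w] * v2.get(w, 0.0) for w in v1)
    n1 = sqrt(sum(x * x for x in v1.values()))
    n2 = sqrt(sum(x * x for x in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def attribute(sample, corpora):
    """Attribute a sample to the author whose corpus vector is closest."""
    sv = word_vector(sample)
    return max(corpora, key=lambda a: cosine_similarity(sv, word_vector(corpora[a])))
```

The anonymizing translation step would then be the inverse: perturb the sample until its vector no longer matches its author’s fingerprint.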
1:00 – 4:00
- Added a datatable, which behaves better; for example, the set() method fires a refresh. Spent the rest of the day trying to get the same behavior with the chart, and finally wound up posting on the forums.
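For reference, the refresh-on-set behavior can be sketched as a tiny observer pattern (a Python sketch with hypothetical names, not YUI’s actual implementation): set() replaces the data and notifies every registered listener, which is what lets the table re-render without extra plumbing.

```python
class DataProvider:
    """Minimal observable data source: set() notifies all listeners."""

    def __init__(self, rows=None):
        self._rows = rows or []
        self._listeners = []

    def on_change(self, callback):
        """Register a callback to run whenever the data changes."""
        self._listeners.append(callback)

    def set(self, rows):
        """Replace the data and fire a refresh on every listener."""
        self._rows = rows
        for cb in self._listeners:
            cb(self._rows)
```

Getting the chart to subscribe the same way as the table is exactly the behavior in question.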
8:00 – 10:30 SR
- Paperwork – finally worked out the receipts from the YUIconf trip.
10:30 – 4:00 FP
- Checked to see if learning time was significant. Nope.
- Make any changes to Ravi’s paper
- Change all references to “normalized” reaction times to “percentage”. Hopefully that will be clearer.
- Cut the paper down to 6-page (CHI WIP format) and 2-page (Haptics or 3DUI?) versions.
- If the office is open, run an ANOVA on the time-to-learn (session 2 – session 1)
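A sketch of the percentage conversion and the ANOVA mentioned above, in pure Python with made-up numbers (a real analysis would use a stats package): reaction times are expressed as a percentage of a baseline mean, and the one-way F statistic is the between-group mean square over the within-group mean square.

```python
def to_percentage(reaction_times, baseline_mean):
    """Express reaction times as a percentage of a baseline mean."""
    return [100.0 * rt / baseline_mean for rt in reaction_times]

def anova_f(*groups):
    """One-way ANOVA F statistic for two or more groups of samples."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    # Between-group sum of squares: group means vs. grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: samples vs. their own group mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

The time-to-learn comparison would feed session 2 minus session 1 differences per participant into the groups.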
8:00 – 11:30 SR
- Meeting with Chris, Lenny and Dong. The upshot is that we now have a “Customer Input” page, which is here.
- Status report to Carla
11:30 – 4:00
- Paper. First draft is done!
8:30 – 1:30 SR
- Ungodly traffic this morning.
- Deployed new FA and RA, plus some new scripts.
- Cognos data is wrong in that amounts in excess of the budget are being calculated from expenditures. The >100% rule might fix this, but I’m worried that it may need to be more sophisticated.
- Did a Cognos upload of 1300 records with logging turned on, and it really slowed down Tomcat, including things like logging in. Accessing the published-data REST servlet on the server worked fine, though. Not sure what the deal is. I turned off logging to the DB and restarted the server, which sped things up. We need to try another Cognos ingest to see what happens; Lenny ran one this morning that took a fraction of the time of the later one.
- At the very least, we’re going to need to deal with user reactions to an apparently hung system with a dialog or something.
- Burning September status to disk.
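One reading of the >100% rule above, as a hedged Python sketch (function and argument names are my own): percent-of-budget is clamped at 100 so that over-budget expenditures don’t inflate the calculated amounts, though as noted the real fix may need to be more sophisticated.

```python
def percent_spent(expenditures, budget):
    """Percent of budget consumed, clamped at 100 per the >100% rule.

    Returns None when there is no budget to divide by, so callers can
    display "n/a" instead of a division error or a misleading number.
    """
    if budget <= 0:
        return None
    return min(100.0 * expenditures / budget, 100.0)
```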
1:30 – 4:30 FP
- Done with table. Back to writing.
8:00 – 10:30 SR
- Requested Dong’s pass. Done.
- Working on sysadm paperwork.
10:30 – FP
- Need to put together grid of results by paper for lit review
8:00 – 4:00 FP
- Basically banging away on the paper. Abstract, Introduction, Previous Work, Experiment are done. About halfway through results. Might finish first draft on Monday?
8:00 – 9:00 SR
- Checking to see if Lenny can come over and help, since he’s stuck at home.
- Doing some research into the new Flash applications
9:00 – 4:00 FP
- Working on the experimental methods
- Nice articles on when to use ANOVA
- Banging away on the paper. Five pages in on the first draft: Abstract, Introduction, Related Work, Experiment, and Apparatus.
8:30 – 10:30 SR
- Working out a plan with Dong about what to do with our time. I’m guessing that the shutdown will last until at least Oct 17. The following is for the rest of this week:
	- Fix script to add EA Cognos data plus autofill (Obligations_outlays)
	- Enter Contract Number in Invoice Entry (RA)
	- Allow commas and periods in data entry (RA)
	- Add Invoice Viewer in RA for the selected REQ
- Get all code up and running on FGMDEV
- Drop Maven and go to Eclipse-based projects
	- Default combobox capability
	- Create a “table_errors” table that has the application, user, date, time, query, and error message, and take out the “Mail to Admin” note
- For next week, add #include for Python module storage and assembly.
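A sketch of what the “table_errors” idea above could look like, using SQLite purely for illustration (the real app presumably targets a different database, and the column names are my guesses): one row per failure recording the application, user, date, time, query, and error message, replacing the “Mail to Admin” note.

```python
import sqlite3
from datetime import datetime

def init_error_table(conn):
    """Create the table_errors table sketched in the plan above."""
    conn.execute("""
        CREATE TABLE IF NOT EXISTS table_errors (
            application TEXT,
            username    TEXT,
            error_date  TEXT,
            error_time  TEXT,
            query       TEXT,
            message     TEXT
        )""")

def log_error(conn, application, username, query, message, now=None):
    """Insert one error row; this replaces mailing the admin."""
    now = now or datetime.now()
    conn.execute(
        "INSERT INTO table_errors VALUES (?, ?, ?, ?, ?, ?)",
        (application, username,
         now.strftime("%Y-%m-%d"), now.strftime("%H:%M:%S"),
         query, message),
    )
    conn.commit()
```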
10:30 – 4:30 FP
- Finished Annotated Bibliography. Experimental design is next.
8:00 – 4:00 FP
- I’m guessing this is what I charge to for a while
- In meeting with Dr. Kuber, I brought up something that I’ve been thinking about since the weekend. The interface works, provably so. The pilot study shows that it can be used for (a) training and (b) “useful” work. If the goal is to produce “blue collar telecommuting”, then the question becomes, how do we actually achieve that? A dumb master-slave system makes very little sense for a few reasons:
- Time lag. It may not always be possible to get a fast enough response loop for haptics to work well.
- Machine intelligence. With robots coming online like Baxter, there is certainly some level of autonomy that the on-site robot can perform. So, what’s a good human-robot synergy?
- I’m thinking that a hybrid virtual/physical interface might be interesting.
- The robotic workcell is constantly scanned and digitized by cameras. The data is then turned into models of the items that the robot is to work with.
- These items are rendered locally to the operator, who manipulates the virtual objects using tight-loop haptics, 3D graphics, etc. Since (often?) the space is well known, the objects can be rendered from a library of CAD-correct parts.
- The operator manipulates the virtual objects. The robot follows the “path” laid down by the operator. The position and behavior of the actual robot is represented in some way (ghost image, warning bar, etc.). This is known as Mediated Teleoperation, and is described nicely in this paper.
- The novel part, at least as far as I can determine at this point, is using mediated telepresence to train a robot in a task:
- The operator can instruct the robot to learn some or all of a particular procedure. This probably entails setting entry, exit, and error conditions for tasks, which the operator is able to create on the local workstation.
- It is reasonable to expect that in many cases, this sort of work will be a mix of manual control and automated behavior. For example, placing of a part may be manual, but screwing a bolt into place to a particular torque could be entirely automatic. If a robot’s behavior is made fully autonomous, the operator needs simply to monitor the system for errors or non-optimal behavior. At that point, the operator could engage another robot and repeat the above process.
- User interfaces that inform the operator when the robot is coming out of autonomous modes in a seamless way need to be explored.
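The entry/error-condition idea above can be sketched as a small state machine (Python, all names hypothetical): each task step knows whether it may run autonomously, and control drops back to the operator whenever the entry condition fails or the error condition fires.

```python
class TaskStep:
    """One step in a procedure, with the conditions the operator
    sets up on the local workstation (hypothetical representation)."""

    def __init__(self, name, automatic, entry, error):
        self.name = name
        self.automatic = automatic  # may the robot run this step itself?
        self.entry = entry          # predicate on world state: OK to start?
        self.error = error          # predicate on world state: abort?

def control_mode(step, state):
    """Decide who drives this step right now: 'robot' or 'operator'."""
    if step.error(state):
        return "operator"  # drop out of autonomy and alert the operator
    if step.automatic and step.entry(state):
        return "robot"     # entry conditions met: run autonomously
    return "operator"      # default to manual control
```

The bolt-torquing example fits directly: placing the part is a manual step, and the torque step becomes automatic once its entry condition (part placed) holds.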
8:00 – 11:30 SR
- Parent projects can only be created if there are no REQs. If there are, pop up a dialog saying that the REQs must be eliminated. A parent project has no visible REQ tab.
- Add table_combobox_defaults with three columns: 1) Editable, 2) Default value, 3) Table name
- Unresolved discussion with Lenny about REQ tracking.
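The parent-project rule above, as a quick Python sketch (field names hypothetical): creation succeeds only when no REQs are attached; otherwise the function returns the message the dialog would show.

```python
def make_parent(project):
    """Convert a project to a parent, or return the dialog message."""
    if project.get("reqs"):
        return "REQs must be eliminated before creating a parent project."
    project["is_parent"] = True  # parents show no REQ tab in the UI
    return None
```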
11:30 – FP